Test Report: KVM_Linux_crio 18773

30a9d8153d68792af1ccb4545db3a1a834f0d1ba:2024-04-29:34253

Test fail (12/207)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-943107 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-943107 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.938666576s)

-- stdout --
	* [addons-943107] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-943107" primary control-plane node in "addons-943107" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-943107 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-943107 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: metrics-server, cloud-spanner, yakd, storage-provisioner, ingress-dns, nvidia-device-plugin, inspektor-gadget, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0429 11:58:33.257243  855239 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:58:33.257363  855239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:33.257375  855239 out.go:304] Setting ErrFile to fd 2...
	I0429 11:58:33.257381  855239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:33.257599  855239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 11:58:33.258341  855239 out.go:298] Setting JSON to false
	I0429 11:58:33.259250  855239 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":74458,"bootTime":1714317455,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:58:33.259331  855239 start.go:139] virtualization: kvm guest
	I0429 11:58:33.262205  855239 out.go:177] * [addons-943107] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:58:33.263808  855239 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 11:58:33.263749  855239 notify.go:220] Checking for updates...
	I0429 11:58:33.265047  855239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:58:33.266420  855239 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 11:58:33.267902  855239 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 11:58:33.269263  855239 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 11:58:33.270633  855239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:58:33.272291  855239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:58:33.306183  855239 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 11:58:33.307502  855239 start.go:297] selected driver: kvm2
	I0429 11:58:33.307527  855239 start.go:901] validating driver "kvm2" against <nil>
	I0429 11:58:33.307541  855239 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:58:33.308241  855239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:58:33.308338  855239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 11:58:33.324898  855239 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 11:58:33.325007  855239 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:58:33.325249  855239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:58:33.325298  855239 cni.go:84] Creating CNI manager for ""
	I0429 11:58:33.325312  855239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 11:58:33.325320  855239 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 11:58:33.325382  855239 start.go:340] cluster config:
	{Name:addons-943107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-943107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:58:33.325477  855239 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:58:33.327525  855239 out.go:177] * Starting "addons-943107" primary control-plane node in "addons-943107" cluster
	I0429 11:58:33.328909  855239 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:58:33.328967  855239 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 11:58:33.328980  855239 cache.go:56] Caching tarball of preloaded images
	I0429 11:58:33.329068  855239 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 11:58:33.329080  855239 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 11:58:33.329406  855239 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/config.json ...
	I0429 11:58:33.329431  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/config.json: {Name:mk9f7e9a33d9bf2d965b49cdce35ae43541052ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:58:33.329575  855239 start.go:360] acquireMachinesLock for addons-943107: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:58:33.329624  855239 start.go:364] duration metric: took 35.404µs to acquireMachinesLock for "addons-943107"
	I0429 11:58:33.329642  855239 start.go:93] Provisioning new machine with config: &{Name:addons-943107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-943107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:58:33.329697  855239 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 11:58:33.331494  855239 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 11:58:33.331672  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:58:33.331727  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:58:33.347431  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0429 11:58:33.347946  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:58:33.348577  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:58:33.348600  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:58:33.349029  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:58:33.349245  855239 main.go:141] libmachine: (addons-943107) Calling .GetMachineName
	I0429 11:58:33.349425  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:58:33.349574  855239 start.go:159] libmachine.API.Create for "addons-943107" (driver="kvm2")
	I0429 11:58:33.349601  855239 client.go:168] LocalClient.Create starting
	I0429 11:58:33.349639  855239 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 11:58:33.473170  855239 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 11:58:33.827103  855239 main.go:141] libmachine: Running pre-create checks...
	I0429 11:58:33.827133  855239 main.go:141] libmachine: (addons-943107) Calling .PreCreateCheck
	I0429 11:58:33.827751  855239 main.go:141] libmachine: (addons-943107) Calling .GetConfigRaw
	I0429 11:58:33.828232  855239 main.go:141] libmachine: Creating machine...
	I0429 11:58:33.828249  855239 main.go:141] libmachine: (addons-943107) Calling .Create
	I0429 11:58:33.828436  855239 main.go:141] libmachine: (addons-943107) Creating KVM machine...
	I0429 11:58:33.829743  855239 main.go:141] libmachine: (addons-943107) DBG | found existing default KVM network
	I0429 11:58:33.830557  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:33.830414  855261 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0429 11:58:33.830577  855239 main.go:141] libmachine: (addons-943107) DBG | created network xml: 
	I0429 11:58:33.830589  855239 main.go:141] libmachine: (addons-943107) DBG | <network>
	I0429 11:58:33.830596  855239 main.go:141] libmachine: (addons-943107) DBG |   <name>mk-addons-943107</name>
	I0429 11:58:33.830607  855239 main.go:141] libmachine: (addons-943107) DBG |   <dns enable='no'/>
	I0429 11:58:33.830612  855239 main.go:141] libmachine: (addons-943107) DBG |   
	I0429 11:58:33.830620  855239 main.go:141] libmachine: (addons-943107) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 11:58:33.830631  855239 main.go:141] libmachine: (addons-943107) DBG |     <dhcp>
	I0429 11:58:33.830643  855239 main.go:141] libmachine: (addons-943107) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 11:58:33.830655  855239 main.go:141] libmachine: (addons-943107) DBG |     </dhcp>
	I0429 11:58:33.830672  855239 main.go:141] libmachine: (addons-943107) DBG |   </ip>
	I0429 11:58:33.830679  855239 main.go:141] libmachine: (addons-943107) DBG |   
	I0429 11:58:33.830683  855239 main.go:141] libmachine: (addons-943107) DBG | </network>
	I0429 11:58:33.830689  855239 main.go:141] libmachine: (addons-943107) DBG | 
	I0429 11:58:33.836251  855239 main.go:141] libmachine: (addons-943107) DBG | trying to create private KVM network mk-addons-943107 192.168.39.0/24...
	I0429 11:58:33.912374  855239 main.go:141] libmachine: (addons-943107) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107 ...
	I0429 11:58:33.912412  855239 main.go:141] libmachine: (addons-943107) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 11:58:33.912435  855239 main.go:141] libmachine: (addons-943107) DBG | private KVM network mk-addons-943107 192.168.39.0/24 created
	I0429 11:58:33.912453  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:33.912311  855261 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 11:58:33.912515  855239 main.go:141] libmachine: (addons-943107) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:58:34.162934  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:34.162769  855261 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa...
	I0429 11:58:34.356661  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:34.356492  855261 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/addons-943107.rawdisk...
	I0429 11:58:34.356698  855239 main.go:141] libmachine: (addons-943107) DBG | Writing magic tar header
	I0429 11:58:34.356708  855239 main.go:141] libmachine: (addons-943107) DBG | Writing SSH key tar header
	I0429 11:58:34.356716  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:34.356654  855261 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107 ...
	I0429 11:58:34.356819  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107
	I0429 11:58:34.356844  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 11:58:34.356858  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107 (perms=drwx------)
	I0429 11:58:34.356871  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 11:58:34.356882  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 11:58:34.356898  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 11:58:34.356924  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 11:58:34.356940  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 11:58:34.356967  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 11:58:34.356996  855239 main.go:141] libmachine: (addons-943107) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 11:58:34.357007  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 11:58:34.357022  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home/jenkins
	I0429 11:58:34.357035  855239 main.go:141] libmachine: (addons-943107) DBG | Checking permissions on dir: /home
	I0429 11:58:34.357047  855239 main.go:141] libmachine: (addons-943107) DBG | Skipping /home - not owner
	I0429 11:58:34.357061  855239 main.go:141] libmachine: (addons-943107) Creating domain...
	I0429 11:58:34.358298  855239 main.go:141] libmachine: (addons-943107) define libvirt domain using xml: 
	I0429 11:58:34.358330  855239 main.go:141] libmachine: (addons-943107) <domain type='kvm'>
	I0429 11:58:34.358339  855239 main.go:141] libmachine: (addons-943107)   <name>addons-943107</name>
	I0429 11:58:34.358352  855239 main.go:141] libmachine: (addons-943107)   <memory unit='MiB'>4000</memory>
	I0429 11:58:34.358425  855239 main.go:141] libmachine: (addons-943107)   <vcpu>2</vcpu>
	I0429 11:58:34.358450  855239 main.go:141] libmachine: (addons-943107)   <features>
	I0429 11:58:34.358456  855239 main.go:141] libmachine: (addons-943107)     <acpi/>
	I0429 11:58:34.358464  855239 main.go:141] libmachine: (addons-943107)     <apic/>
	I0429 11:58:34.358469  855239 main.go:141] libmachine: (addons-943107)     <pae/>
	I0429 11:58:34.358474  855239 main.go:141] libmachine: (addons-943107)     
	I0429 11:58:34.358481  855239 main.go:141] libmachine: (addons-943107)   </features>
	I0429 11:58:34.358487  855239 main.go:141] libmachine: (addons-943107)   <cpu mode='host-passthrough'>
	I0429 11:58:34.358495  855239 main.go:141] libmachine: (addons-943107)   
	I0429 11:58:34.358504  855239 main.go:141] libmachine: (addons-943107)   </cpu>
	I0429 11:58:34.358511  855239 main.go:141] libmachine: (addons-943107)   <os>
	I0429 11:58:34.358516  855239 main.go:141] libmachine: (addons-943107)     <type>hvm</type>
	I0429 11:58:34.358524  855239 main.go:141] libmachine: (addons-943107)     <boot dev='cdrom'/>
	I0429 11:58:34.358528  855239 main.go:141] libmachine: (addons-943107)     <boot dev='hd'/>
	I0429 11:58:34.358535  855239 main.go:141] libmachine: (addons-943107)     <bootmenu enable='no'/>
	I0429 11:58:34.358543  855239 main.go:141] libmachine: (addons-943107)   </os>
	I0429 11:58:34.358550  855239 main.go:141] libmachine: (addons-943107)   <devices>
	I0429 11:58:34.358559  855239 main.go:141] libmachine: (addons-943107)     <disk type='file' device='cdrom'>
	I0429 11:58:34.358573  855239 main.go:141] libmachine: (addons-943107)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/boot2docker.iso'/>
	I0429 11:58:34.358587  855239 main.go:141] libmachine: (addons-943107)       <target dev='hdc' bus='scsi'/>
	I0429 11:58:34.358596  855239 main.go:141] libmachine: (addons-943107)       <readonly/>
	I0429 11:58:34.358603  855239 main.go:141] libmachine: (addons-943107)     </disk>
	I0429 11:58:34.358613  855239 main.go:141] libmachine: (addons-943107)     <disk type='file' device='disk'>
	I0429 11:58:34.358619  855239 main.go:141] libmachine: (addons-943107)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 11:58:34.358627  855239 main.go:141] libmachine: (addons-943107)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/addons-943107.rawdisk'/>
	I0429 11:58:34.358635  855239 main.go:141] libmachine: (addons-943107)       <target dev='hda' bus='virtio'/>
	I0429 11:58:34.358640  855239 main.go:141] libmachine: (addons-943107)     </disk>
	I0429 11:58:34.358644  855239 main.go:141] libmachine: (addons-943107)     <interface type='network'>
	I0429 11:58:34.358652  855239 main.go:141] libmachine: (addons-943107)       <source network='mk-addons-943107'/>
	I0429 11:58:34.358657  855239 main.go:141] libmachine: (addons-943107)       <model type='virtio'/>
	I0429 11:58:34.358664  855239 main.go:141] libmachine: (addons-943107)     </interface>
	I0429 11:58:34.358669  855239 main.go:141] libmachine: (addons-943107)     <interface type='network'>
	I0429 11:58:34.358675  855239 main.go:141] libmachine: (addons-943107)       <source network='default'/>
	I0429 11:58:34.358680  855239 main.go:141] libmachine: (addons-943107)       <model type='virtio'/>
	I0429 11:58:34.358720  855239 main.go:141] libmachine: (addons-943107)     </interface>
	I0429 11:58:34.358758  855239 main.go:141] libmachine: (addons-943107)     <serial type='pty'>
	I0429 11:58:34.358775  855239 main.go:141] libmachine: (addons-943107)       <target port='0'/>
	I0429 11:58:34.358786  855239 main.go:141] libmachine: (addons-943107)     </serial>
	I0429 11:58:34.358803  855239 main.go:141] libmachine: (addons-943107)     <console type='pty'>
	I0429 11:58:34.358816  855239 main.go:141] libmachine: (addons-943107)       <target type='serial' port='0'/>
	I0429 11:58:34.358827  855239 main.go:141] libmachine: (addons-943107)     </console>
	I0429 11:58:34.358839  855239 main.go:141] libmachine: (addons-943107)     <rng model='virtio'>
	I0429 11:58:34.358857  855239 main.go:141] libmachine: (addons-943107)       <backend model='random'>/dev/random</backend>
	I0429 11:58:34.358869  855239 main.go:141] libmachine: (addons-943107)     </rng>
	I0429 11:58:34.358877  855239 main.go:141] libmachine: (addons-943107)     
	I0429 11:58:34.358889  855239 main.go:141] libmachine: (addons-943107)     
	I0429 11:58:34.358899  855239 main.go:141] libmachine: (addons-943107)   </devices>
	I0429 11:58:34.358910  855239 main.go:141] libmachine: (addons-943107) </domain>
	I0429 11:58:34.358918  855239 main.go:141] libmachine: (addons-943107) 
	I0429 11:58:34.363965  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:92:29:6d in network default
	I0429 11:58:34.364727  855239 main.go:141] libmachine: (addons-943107) Ensuring networks are active...
	I0429 11:58:34.364744  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:34.365457  855239 main.go:141] libmachine: (addons-943107) Ensuring network default is active
	I0429 11:58:34.365819  855239 main.go:141] libmachine: (addons-943107) Ensuring network mk-addons-943107 is active
	I0429 11:58:34.366429  855239 main.go:141] libmachine: (addons-943107) Getting domain xml...
	I0429 11:58:34.367351  855239 main.go:141] libmachine: (addons-943107) Creating domain...
	I0429 11:58:35.595620  855239 main.go:141] libmachine: (addons-943107) Waiting to get IP...
	I0429 11:58:35.596424  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:35.596801  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:35.596831  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:35.596778  855261 retry.go:31] will retry after 204.672352ms: waiting for machine to come up
	I0429 11:58:35.803548  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:35.804125  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:35.804164  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:35.804038  855261 retry.go:31] will retry after 346.803922ms: waiting for machine to come up
	I0429 11:58:36.152879  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:36.153294  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:36.153323  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:36.153249  855261 retry.go:31] will retry after 309.478581ms: waiting for machine to come up
	I0429 11:58:36.464862  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:36.465323  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:36.465360  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:36.465269  855261 retry.go:31] will retry after 511.038905ms: waiting for machine to come up
	I0429 11:58:36.977986  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:36.978366  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:36.978392  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:36.978318  855261 retry.go:31] will retry after 547.497252ms: waiting for machine to come up
	I0429 11:58:37.527119  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:37.527647  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:37.527692  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:37.527566  855261 retry.go:31] will retry after 579.855719ms: waiting for machine to come up
	I0429 11:58:38.109424  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:38.109970  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:38.109996  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:38.109868  855261 retry.go:31] will retry after 970.342231ms: waiting for machine to come up
	I0429 11:58:39.081825  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:39.082174  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:39.082202  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:39.082135  855261 retry.go:31] will retry after 1.386140204s: waiting for machine to come up
	I0429 11:58:40.470833  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:40.471264  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:40.471284  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:40.471216  855261 retry.go:31] will retry after 1.248843306s: waiting for machine to come up
	I0429 11:58:41.721814  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:41.722271  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:41.722304  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:41.722217  855261 retry.go:31] will retry after 1.814517167s: waiting for machine to come up
	I0429 11:58:43.539158  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:43.539622  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:43.539651  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:43.539578  855261 retry.go:31] will retry after 1.998355959s: waiting for machine to come up
	I0429 11:58:45.540991  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:45.541629  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:45.541660  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:45.541578  855261 retry.go:31] will retry after 2.619980651s: waiting for machine to come up
	I0429 11:58:48.164343  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:48.164776  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:48.164803  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:48.164732  855261 retry.go:31] will retry after 3.948644521s: waiting for machine to come up
	I0429 11:58:52.117704  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:52.118236  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find current IP address of domain addons-943107 in network mk-addons-943107
	I0429 11:58:52.118268  855239 main.go:141] libmachine: (addons-943107) DBG | I0429 11:58:52.118129  855261 retry.go:31] will retry after 3.847570469s: waiting for machine to come up
	I0429 11:58:55.968651  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:55.969204  855239 main.go:141] libmachine: (addons-943107) Found IP for machine: 192.168.39.39
	I0429 11:58:55.969258  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has current primary IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:55.969272  855239 main.go:141] libmachine: (addons-943107) Reserving static IP address...
	I0429 11:58:55.969608  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find host DHCP lease matching {name: "addons-943107", mac: "52:54:00:57:1a:36", ip: "192.168.39.39"} in network mk-addons-943107
	I0429 11:58:56.064457  855239 main.go:141] libmachine: (addons-943107) DBG | Getting to WaitForSSH function...
	I0429 11:58:56.064493  855239 main.go:141] libmachine: (addons-943107) Reserved static IP address: 192.168.39.39
	I0429 11:58:56.064507  855239 main.go:141] libmachine: (addons-943107) Waiting for SSH to be available...
	I0429 11:58:56.067561  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:56.067896  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107
	I0429 11:58:56.067930  855239 main.go:141] libmachine: (addons-943107) DBG | unable to find defined IP address of network mk-addons-943107 interface with MAC address 52:54:00:57:1a:36
	I0429 11:58:56.068168  855239 main.go:141] libmachine: (addons-943107) DBG | Using SSH client type: external
	I0429 11:58:56.068208  855239 main.go:141] libmachine: (addons-943107) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa (-rw-------)
	I0429 11:58:56.068241  855239 main.go:141] libmachine: (addons-943107) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 11:58:56.068270  855239 main.go:141] libmachine: (addons-943107) DBG | About to run SSH command:
	I0429 11:58:56.068288  855239 main.go:141] libmachine: (addons-943107) DBG | exit 0
	I0429 11:58:56.072676  855239 main.go:141] libmachine: (addons-943107) DBG | SSH cmd err, output: exit status 255: 
	I0429 11:58:56.072706  855239 main.go:141] libmachine: (addons-943107) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 11:58:56.072714  855239 main.go:141] libmachine: (addons-943107) DBG | command : exit 0
	I0429 11:58:56.072719  855239 main.go:141] libmachine: (addons-943107) DBG | err     : exit status 255
	I0429 11:58:56.072727  855239 main.go:141] libmachine: (addons-943107) DBG | output  : 
	I0429 11:58:59.073587  855239 main.go:141] libmachine: (addons-943107) DBG | Getting to WaitForSSH function...
	I0429 11:58:59.076554  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.076968  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.077006  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.077022  855239 main.go:141] libmachine: (addons-943107) DBG | Using SSH client type: external
	I0429 11:58:59.077030  855239 main.go:141] libmachine: (addons-943107) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa (-rw-------)
	I0429 11:58:59.077050  855239 main.go:141] libmachine: (addons-943107) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 11:58:59.077065  855239 main.go:141] libmachine: (addons-943107) DBG | About to run SSH command:
	I0429 11:58:59.077075  855239 main.go:141] libmachine: (addons-943107) DBG | exit 0
	I0429 11:58:59.199777  855239 main.go:141] libmachine: (addons-943107) DBG | SSH cmd err, output: <nil>: 
	I0429 11:58:59.200151  855239 main.go:141] libmachine: (addons-943107) KVM machine creation complete!
	I0429 11:58:59.200609  855239 main.go:141] libmachine: (addons-943107) Calling .GetConfigRaw
	I0429 11:58:59.238187  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:58:59.238627  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:58:59.238911  855239 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 11:58:59.238935  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:58:59.240906  855239 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 11:58:59.240928  855239 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 11:58:59.240936  855239 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 11:58:59.240945  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.244004  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.244343  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.244379  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.244521  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:58:59.244773  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.244978  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.245143  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:58:59.245341  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:58:59.245627  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:58:59.245646  855239 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 11:58:59.351290  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:58:59.351317  855239 main.go:141] libmachine: Detecting the provisioner...
	I0429 11:58:59.351325  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.354829  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.355284  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.355318  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.355541  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:58:59.355802  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.356036  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.356249  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:58:59.356465  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:58:59.356677  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:58:59.356691  855239 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 11:58:59.460946  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 11:58:59.461072  855239 main.go:141] libmachine: found compatible host: buildroot
	I0429 11:58:59.461085  855239 main.go:141] libmachine: Provisioning with buildroot...
	I0429 11:58:59.461097  855239 main.go:141] libmachine: (addons-943107) Calling .GetMachineName
	I0429 11:58:59.461422  855239 buildroot.go:166] provisioning hostname "addons-943107"
	I0429 11:58:59.461453  855239 main.go:141] libmachine: (addons-943107) Calling .GetMachineName
	I0429 11:58:59.461758  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.464735  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.465129  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.465157  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.465496  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:58:59.465709  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.465912  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.466092  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:58:59.466290  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:58:59.466483  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:58:59.466495  855239 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-943107 && echo "addons-943107" | sudo tee /etc/hostname
	I0429 11:58:59.589578  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-943107
	
	I0429 11:58:59.589618  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.592924  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.593246  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.593276  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.593427  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:58:59.593683  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.593862  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.594000  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:58:59.594153  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:58:59.594351  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:58:59.594375  855239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-943107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-943107/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-943107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:58:59.711807  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:58:59.711847  855239 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 11:58:59.711876  855239 buildroot.go:174] setting up certificates
	I0429 11:58:59.711894  855239 provision.go:84] configureAuth start
	I0429 11:58:59.711909  855239 main.go:141] libmachine: (addons-943107) Calling .GetMachineName
	I0429 11:58:59.712302  855239 main.go:141] libmachine: (addons-943107) Calling .GetIP
	I0429 11:58:59.715304  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.715779  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.715809  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.715974  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.718510  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.719038  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.719072  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.719258  855239 provision.go:143] copyHostCerts
	I0429 11:58:59.719352  855239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 11:58:59.719533  855239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 11:58:59.719637  855239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 11:58:59.719726  855239 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.addons-943107 san=[127.0.0.1 192.168.39.39 addons-943107 localhost minikube]
	I0429 11:58:59.897840  855239 provision.go:177] copyRemoteCerts
	I0429 11:58:59.897928  855239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:58:59.897977  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:58:59.900918  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.901228  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:58:59.901254  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:58:59.901431  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:58:59.901685  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:58:59.901842  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:58:59.902020  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:58:59.983122  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:59:00.009907  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:59:00.036765  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:59:00.063537  855239 provision.go:87] duration metric: took 351.624331ms to configureAuth
	I0429 11:59:00.063579  855239 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:59:00.063764  855239 config.go:182] Loaded profile config "addons-943107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:59:00.063857  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:00.066671  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.067007  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.067055  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.067214  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:00.067474  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.067666  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.068002  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:00.068186  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:59:00.068391  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:59:00.068407  855239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 11:59:00.348248  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 11:59:00.348280  855239 main.go:141] libmachine: Checking connection to Docker...
	I0429 11:59:00.348292  855239 main.go:141] libmachine: (addons-943107) Calling .GetURL
	I0429 11:59:00.349792  855239 main.go:141] libmachine: (addons-943107) DBG | Using libvirt version 6000000
	I0429 11:59:00.352330  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.352710  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.352767  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.352953  855239 main.go:141] libmachine: Docker is up and running!
	I0429 11:59:00.352975  855239 main.go:141] libmachine: Reticulating splines...
	I0429 11:59:00.352986  855239 client.go:171] duration metric: took 27.003376189s to LocalClient.Create
	I0429 11:59:00.353018  855239 start.go:167] duration metric: took 27.003443831s to libmachine.API.Create "addons-943107"
	I0429 11:59:00.353032  855239 start.go:293] postStartSetup for "addons-943107" (driver="kvm2")
	I0429 11:59:00.353049  855239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:59:00.353073  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:00.353382  855239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:59:00.353410  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:00.355578  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.355903  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.355929  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.356138  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:00.356387  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.356539  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:00.356708  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:00.439300  855239 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:59:00.444114  855239 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:59:00.444154  855239 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 11:59:00.444257  855239 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 11:59:00.444283  855239 start.go:296] duration metric: took 91.243677ms for postStartSetup
	I0429 11:59:00.444327  855239 main.go:141] libmachine: (addons-943107) Calling .GetConfigRaw
	I0429 11:59:00.445031  855239 main.go:141] libmachine: (addons-943107) Calling .GetIP
	I0429 11:59:00.447832  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.448151  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.448186  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.448560  855239 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/config.json ...
	I0429 11:59:00.448848  855239 start.go:128] duration metric: took 27.119135672s to createHost
	I0429 11:59:00.448891  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:00.451776  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.452145  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.452174  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.452342  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:00.452609  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.452791  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.452969  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:00.453303  855239 main.go:141] libmachine: Using SSH client type: native
	I0429 11:59:00.453513  855239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0429 11:59:00.453529  855239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:59:00.556621  855239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391940.532906001
	
	I0429 11:59:00.556668  855239 fix.go:216] guest clock: 1714391940.532906001
	I0429 11:59:00.556681  855239 fix.go:229] Guest: 2024-04-29 11:59:00.532906001 +0000 UTC Remote: 2024-04-29 11:59:00.448866864 +0000 UTC m=+27.241160902 (delta=84.039137ms)
	I0429 11:59:00.556743  855239 fix.go:200] guest clock delta is within tolerance: 84.039137ms
	I0429 11:59:00.556750  855239 start.go:83] releasing machines lock for "addons-943107", held for 27.227116472s
	I0429 11:59:00.556783  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:00.557097  855239 main.go:141] libmachine: (addons-943107) Calling .GetIP
	I0429 11:59:00.559916  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.560300  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.560331  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.560720  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:00.561322  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:00.561544  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:00.561653  855239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:59:00.561710  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:00.561755  855239 ssh_runner.go:195] Run: cat /version.json
	I0429 11:59:00.561785  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:00.564460  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.564718  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.564830  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.564854  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.565042  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:00.565122  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:00.565150  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:00.565277  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.565350  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:00.565475  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:00.565503  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:00.565758  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:00.565776  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:00.565925  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:00.669912  855239 ssh_runner.go:195] Run: systemctl --version
	I0429 11:59:00.676320  855239 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 11:59:00.839890  855239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 11:59:00.847394  855239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:59:00.847493  855239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:59:00.865283  855239 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:59:00.865318  855239 start.go:494] detecting cgroup driver to use...
	I0429 11:59:00.865413  855239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:59:00.885607  855239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:59:00.899984  855239 docker.go:217] disabling cri-docker service (if available) ...
	I0429 11:59:00.900073  855239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 11:59:00.914657  855239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 11:59:00.929621  855239 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 11:59:01.048083  855239 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 11:59:01.196443  855239 docker.go:233] disabling docker service ...
	I0429 11:59:01.196554  855239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 11:59:01.211997  855239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 11:59:01.226316  855239 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 11:59:01.369905  855239 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 11:59:01.495868  855239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 11:59:01.511295  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:59:01.532545  855239 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 11:59:01.532628  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.544697  855239 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 11:59:01.544788  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.557002  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.569105  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.581033  855239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:59:01.593618  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.605682  855239 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.625337  855239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:59:01.637949  855239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:59:01.649978  855239 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 11:59:01.650059  855239 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 11:59:01.667512  855239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:59:01.680181  855239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:59:01.813636  855239 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 11:59:01.966608  855239 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 11:59:01.966707  855239 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 11:59:01.972137  855239 start.go:562] Will wait 60s for crictl version
	I0429 11:59:01.972238  855239 ssh_runner.go:195] Run: which crictl
	I0429 11:59:01.976590  855239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:59:02.024149  855239 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 11:59:02.024254  855239 ssh_runner.go:195] Run: crio --version
	I0429 11:59:02.055429  855239 ssh_runner.go:195] Run: crio --version
	I0429 11:59:02.088217  855239 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 11:59:02.089724  855239 main.go:141] libmachine: (addons-943107) Calling .GetIP
	I0429 11:59:02.092994  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:02.093485  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:02.093522  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:02.093821  855239 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 11:59:02.098872  855239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:59:02.113579  855239 kubeadm.go:877] updating cluster {Name:addons-943107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-943107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:59:02.113719  855239 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:59:02.113773  855239 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:59:02.150050  855239 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 11:59:02.150129  855239 ssh_runner.go:195] Run: which lz4
	I0429 11:59:02.154569  855239 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 11:59:02.159042  855239 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 11:59:02.159090  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 11:59:03.585872  855239 crio.go:462] duration metric: took 1.431351918s to copy over tarball
	I0429 11:59:03.585960  855239 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 11:59:05.988863  855239 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.402865341s)
	I0429 11:59:05.988905  855239 crio.go:469] duration metric: took 2.402993879s to extract the tarball
	I0429 11:59:05.988918  855239 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 11:59:06.028257  855239 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:59:06.076088  855239 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 11:59:06.076120  855239 cache_images.go:84] Images are preloaded, skipping loading
	I0429 11:59:06.076129  855239 kubeadm.go:928] updating node { 192.168.39.39 8443 v1.30.0 crio true true} ...
	I0429 11:59:06.076326  855239 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-943107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-943107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:59:06.076422  855239 ssh_runner.go:195] Run: crio config
	I0429 11:59:06.124788  855239 cni.go:84] Creating CNI manager for ""
	I0429 11:59:06.124817  855239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 11:59:06.124839  855239 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:59:06.124877  855239 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-943107 NodeName:addons-943107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:59:06.125090  855239 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-943107"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 11:59:06.125196  855239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:59:06.137399  855239 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:59:06.137506  855239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 11:59:06.148662  855239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 11:59:06.167424  855239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:59:06.186708  855239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0429 11:59:06.207010  855239 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I0429 11:59:06.211164  855239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:59:06.225093  855239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:59:06.358797  855239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:59:06.381825  855239 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107 for IP: 192.168.39.39
	I0429 11:59:06.381857  855239 certs.go:194] generating shared ca certs ...
	I0429 11:59:06.381876  855239 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.382049  855239 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 11:59:06.513386  855239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt ...
	I0429 11:59:06.513429  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt: {Name:mk39b1bec1d4d16eb6fafeb0984c7e01ffe5830c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.513641  855239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key ...
	I0429 11:59:06.513656  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key: {Name:mkbe7605cd61a4b65455e73eb81a81b05113b7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.513764  855239 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 11:59:06.686124  855239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt ...
	I0429 11:59:06.686168  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt: {Name:mk4a69cb30023074d361c666567fef53b992197c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.686348  855239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key ...
	I0429 11:59:06.686359  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key: {Name:mk8776204f36aaa310b069828f51dcf285140a60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.686432  855239 certs.go:256] generating profile certs ...
	I0429 11:59:06.686520  855239 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.key
	I0429 11:59:06.686541  855239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.crt with IP's: []
	I0429 11:59:06.834181  855239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.crt ...
	I0429 11:59:06.834220  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.crt: {Name:mk07c6137e4b93baf00cb40880dbc5c2e17fcc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.834402  855239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.key ...
	I0429 11:59:06.834414  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/client.key: {Name:mk92b4b1ba1bbaed5a5d1bc05bf57a6740084630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.834486  855239 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key.a1643f6f
	I0429 11:59:06.834507  855239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt.a1643f6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39]
	I0429 11:59:06.978215  855239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt.a1643f6f ...
	I0429 11:59:06.978253  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt.a1643f6f: {Name:mk6744f584dddd2fb811eb74924f1d60df0716b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.978423  855239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key.a1643f6f ...
	I0429 11:59:06.978437  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key.a1643f6f: {Name:mk4ff6f18d368edae5e633865a73df79f531663d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:06.978503  855239 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt.a1643f6f -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt
	I0429 11:59:06.978610  855239 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key.a1643f6f -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key
	I0429 11:59:06.978666  855239 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.key
	I0429 11:59:06.978686  855239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.crt with IP's: []
	I0429 11:59:07.159870  855239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.crt ...
	I0429 11:59:07.159906  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.crt: {Name:mk85620ffc41629f8934863a8eb5836192dfb762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:07.160074  855239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.key ...
	I0429 11:59:07.160087  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.key: {Name:mkf8566b21c1bc51e851b337ec21bc0e0fc4f5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:07.160259  855239 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 11:59:07.160298  855239 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 11:59:07.160333  855239 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 11:59:07.160372  855239 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 11:59:07.161035  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:59:07.192333  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:59:07.218424  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:59:07.247229  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 11:59:07.276274  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 11:59:07.308080  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:59:07.334803  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:59:07.361167  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/addons-943107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 11:59:07.388300  855239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:59:07.414513  855239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 11:59:07.433756  855239 ssh_runner.go:195] Run: openssl version
	I0429 11:59:07.440213  855239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:59:07.452983  855239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:59:07.458583  855239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:59:07.458675  855239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:59:07.465385  855239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:59:07.478318  855239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:59:07.483111  855239 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:59:07.483171  855239 kubeadm.go:391] StartCluster: {Name:addons-943107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-943107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:59:07.483263  855239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 11:59:07.483371  855239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 11:59:07.522863  855239 cri.go:89] found id: ""
	I0429 11:59:07.522971  855239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:59:07.534661  855239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:59:07.546042  855239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:59:07.557172  855239 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:59:07.557201  855239 kubeadm.go:156] found existing configuration files:
	
	I0429 11:59:07.557267  855239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:59:07.568662  855239 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:59:07.568754  855239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:59:07.580084  855239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:59:07.590505  855239 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:59:07.590597  855239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:59:07.602387  855239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:59:07.612757  855239 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:59:07.612830  855239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:59:07.623622  855239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:59:07.634185  855239 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:59:07.634258  855239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 11:59:07.645349  855239 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 11:59:07.715861  855239 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:59:07.715969  855239 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:59:07.864554  855239 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:59:07.864708  855239 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:59:07.864847  855239 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 11:59:08.106533  855239 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:59:08.108653  855239 out.go:204]   - Generating certificates and keys ...
	I0429 11:59:08.108771  855239 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:59:08.108850  855239 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:59:08.353598  855239 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:59:08.436399  855239 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:59:08.532276  855239 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:59:08.806247  855239 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:59:08.948919  855239 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:59:08.949131  855239 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-943107 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0429 11:59:09.138342  855239 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:59:09.138503  855239 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-943107 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0429 11:59:09.259195  855239 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:59:09.867320  855239 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:59:09.993085  855239 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:59:09.993165  855239 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:59:10.047239  855239 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:59:10.210114  855239 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:59:10.356479  855239 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:59:10.489232  855239 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:59:10.651230  855239 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:59:10.651760  855239 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:59:10.655881  855239 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:59:10.658082  855239 out.go:204]   - Booting up control plane ...
	I0429 11:59:10.658224  855239 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:59:10.658370  855239 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:59:10.658508  855239 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:59:10.673902  855239 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:59:10.675229  855239 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:59:10.675307  855239 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:59:10.811461  855239 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:59:10.811572  855239 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:59:11.312418  855239 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.32722ms
	I0429 11:59:11.312553  855239 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:59:16.813025  855239 kubeadm.go:309] [api-check] The API server is healthy after 5.501575158s
	I0429 11:59:16.824379  855239 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:59:16.847607  855239 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:59:16.884479  855239 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:59:16.884783  855239 kubeadm.go:309] [mark-control-plane] Marking the node addons-943107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:59:16.897867  855239 kubeadm.go:309] [bootstrap-token] Using token: dn98sn.1vhq1ml92wc4bpyx
	I0429 11:59:16.899437  855239 out.go:204]   - Configuring RBAC rules ...
	I0429 11:59:16.899609  855239 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:59:16.905859  855239 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:59:16.915061  855239 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:59:16.922854  855239 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:59:16.927213  855239 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:59:16.931540  855239 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:59:17.220936  855239 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:59:17.658516  855239 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:59:18.217775  855239 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:59:18.218794  855239 kubeadm.go:309] 
	I0429 11:59:18.218859  855239 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:59:18.218864  855239 kubeadm.go:309] 
	I0429 11:59:18.218941  855239 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:59:18.218948  855239 kubeadm.go:309] 
	I0429 11:59:18.218979  855239 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:59:18.219106  855239 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:59:18.219208  855239 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:59:18.219224  855239 kubeadm.go:309] 
	I0429 11:59:18.219314  855239 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:59:18.219334  855239 kubeadm.go:309] 
	I0429 11:59:18.219417  855239 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:59:18.219427  855239 kubeadm.go:309] 
	I0429 11:59:18.219505  855239 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:59:18.219612  855239 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:59:18.219704  855239 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:59:18.219722  855239 kubeadm.go:309] 
	I0429 11:59:18.219856  855239 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:59:18.219976  855239 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:59:18.219988  855239 kubeadm.go:309] 
	I0429 11:59:18.220096  855239 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dn98sn.1vhq1ml92wc4bpyx \
	I0429 11:59:18.220227  855239 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 11:59:18.220252  855239 kubeadm.go:309] 	--control-plane 
	I0429 11:59:18.220257  855239 kubeadm.go:309] 
	I0429 11:59:18.220342  855239 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:59:18.220350  855239 kubeadm.go:309] 
	I0429 11:59:18.220440  855239 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dn98sn.1vhq1ml92wc4bpyx \
	I0429 11:59:18.220568  855239 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 11:59:18.221219  855239 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:59:18.221340  855239 cni.go:84] Creating CNI manager for ""
	I0429 11:59:18.221358  855239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 11:59:18.223178  855239 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 11:59:18.224660  855239 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 11:59:18.238734  855239 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 11:59:18.259228  855239 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:59:18.259314  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:18.259314  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-943107 minikube.k8s.io/updated_at=2024_04_29T11_59_18_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=addons-943107 minikube.k8s.io/primary=true
	I0429 11:59:18.288888  855239 ops.go:34] apiserver oom_adj: -16
	I0429 11:59:18.419145  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:18.919908  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:19.419405  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:19.919270  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:20.419412  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:20.919159  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:21.419630  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:21.919337  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:22.419820  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:22.919559  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:23.419508  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:23.919808  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:24.419415  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:24.919891  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:25.419474  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:25.920174  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:26.419269  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:26.920258  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:27.420013  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:27.919455  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:28.419846  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:28.919349  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:29.419291  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:29.919302  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:30.419805  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:30.920011  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:31.419963  855239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:59:31.760636  855239 kubeadm.go:1107] duration metric: took 13.501398084s to wait for elevateKubeSystemPrivileges
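	The repeated `kubectl get sa default` calls above are minikube polling, roughly every half second, until the default service account exists; the 13.5s duration metric is simply how long that loop ran. Expressed as a shell loop on the node, the equivalent wait would be (a sketch using the same kubectl binary and kubeconfig path as the log):

	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done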
	W0429 11:59:31.760683  855239 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:59:31.760692  855239 kubeadm.go:393] duration metric: took 24.277526228s to StartCluster
	I0429 11:59:31.760715  855239 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:31.760866  855239 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 11:59:31.761326  855239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:59:31.761569  855239 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:59:31.763662  855239 out.go:177] * Verifying Kubernetes components...
	I0429 11:59:31.761604  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:59:31.761621  855239 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 11:59:31.761814  855239 config.go:182] Loaded profile config "addons-943107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
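	The toEnable map logged just above is the resolved addon set for this profile. On a live cluster the same information can be read back with minikube's addons subcommand (illustrative; the profile name is taken from this run):

	    out/minikube-linux-amd64 addons list -p addons-943107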
	I0429 11:59:31.765085  855239 addons.go:69] Setting yakd=true in profile "addons-943107"
	I0429 11:59:31.765127  855239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:59:31.765140  855239 addons.go:234] Setting addon yakd=true in "addons-943107"
	I0429 11:59:31.765181  855239 addons.go:69] Setting ingress-dns=true in profile "addons-943107"
	I0429 11:59:31.765203  855239 addons.go:69] Setting inspektor-gadget=true in profile "addons-943107"
	I0429 11:59:31.765217  855239 addons.go:234] Setting addon ingress-dns=true in "addons-943107"
	I0429 11:59:31.765222  855239 addons.go:69] Setting default-storageclass=true in profile "addons-943107"
	I0429 11:59:31.765254  855239 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-943107"
	I0429 11:59:31.765266  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.765310  855239 addons.go:234] Setting addon inspektor-gadget=true in "addons-943107"
	I0429 11:59:31.765192  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.765368  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.765659  855239 addons.go:69] Setting metrics-server=true in profile "addons-943107"
	I0429 11:59:31.765681  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.765697  855239 addons.go:234] Setting addon metrics-server=true in "addons-943107"
	I0429 11:59:31.765710  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.765715  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.765731  855239 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-943107"
	I0429 11:59:31.765745  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.765749  855239 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-943107"
	I0429 11:59:31.765764  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.765790  855239 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-943107"
	I0429 11:59:31.765887  855239 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-943107"
	I0429 11:59:31.765935  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.766115  855239 addons.go:69] Setting ingress=true in profile "addons-943107"
	I0429 11:59:31.766155  855239 addons.go:69] Setting registry=true in profile "addons-943107"
	I0429 11:59:31.766169  855239 addons.go:234] Setting addon ingress=true in "addons-943107"
	I0429 11:59:31.766199  855239 addons.go:234] Setting addon registry=true in "addons-943107"
	I0429 11:59:31.766211  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.766227  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.766294  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766314  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766535  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766541  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766560  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766563  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766579  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766599  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766640  855239 addons.go:69] Setting cloud-spanner=true in profile "addons-943107"
	I0429 11:59:31.766139  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766668  855239 addons.go:234] Setting addon cloud-spanner=true in "addons-943107"
	I0429 11:59:31.765665  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.766670  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766690  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.766689  855239 addons.go:69] Setting gcp-auth=true in profile "addons-943107"
	I0429 11:59:31.766712  855239 mustload.go:65] Loading cluster: addons-943107
	I0429 11:59:31.765720  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.766753  855239 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-943107"
	I0429 11:59:31.766780  855239 addons.go:69] Setting helm-tiller=true in profile "addons-943107"
	I0429 11:59:31.766779  855239 addons.go:69] Setting volumesnapshots=true in profile "addons-943107"
	I0429 11:59:31.766800  855239 addons.go:234] Setting addon helm-tiller=true in "addons-943107"
	I0429 11:59:31.766821  855239 addons.go:234] Setting addon volumesnapshots=true in "addons-943107"
	I0429 11:59:31.766781  855239 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-943107"
	I0429 11:59:31.766759  855239 addons.go:69] Setting storage-provisioner=true in profile "addons-943107"
	I0429 11:59:31.766870  855239 addons.go:234] Setting addon storage-provisioner=true in "addons-943107"
	I0429 11:59:31.766911  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.766914  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.767071  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.767098  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.767257  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.767295  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.767297  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.767326  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.767427  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.767934  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.767986  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.772404  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.772876  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.772917  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.788237  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0429 11:59:31.788515  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0429 11:59:31.788633  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0429 11:59:31.789172  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.789427  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.789857  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.790083  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.790097  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.790697  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.790724  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.790800  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.790937  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.790952  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.791265  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.791655  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.791690  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.792231  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.792842  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.792936  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.803136  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34413
	I0429 11:59:31.803868  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.804586  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.804613  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.805030  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.805727  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.805754  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.805936  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0429 11:59:31.807671  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0429 11:59:31.809462  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.809512  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.809758  855239 config.go:182] Loaded profile config "addons-943107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:59:31.810151  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.810177  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.811467  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.811505  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.818438  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.818626  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0429 11:59:31.818837  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.819491  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.819519  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.819613  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.819779  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.819790  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.820375  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.820440  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.820455  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.820477  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.821029  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.821056  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.821275  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.821325  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.821689  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.822410  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.822500  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.830247  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0429 11:59:31.830812  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32979
	I0429 11:59:31.831151  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.831575  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0429 11:59:31.831620  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.832018  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.832173  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.832202  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.832305  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.832331  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.832758  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.832824  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.832889  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.832914  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.833504  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.833627  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.833687  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.835861  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.836238  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.836686  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.836723  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.839085  855239 addons.go:234] Setting addon default-storageclass=true in "addons-943107"
	I0429 11:59:31.839141  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.839581  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.839646  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.839936  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.840040  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0429 11:59:31.904551  855239 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0429 11:59:31.840567  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.845834  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0429 11:59:31.846555  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0429 11:59:31.851265  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0429 11:59:31.851294  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0429 11:59:31.853468  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I0429 11:59:31.860944  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0429 11:59:31.860993  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0429 11:59:31.866600  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0429 11:59:31.870061  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I0429 11:59:31.870052  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0429 11:59:31.872227  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0429 11:59:31.874894  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0429 11:59:31.876212  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0429 11:59:31.908143  855239 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:59:31.907642  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907672  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907680  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907684  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907684  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907714  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907744  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907751  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907754  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907757  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907759  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907793  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.907831  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.908083  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.909735  855239 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:59:31.912561  855239 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:59:31.912585  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 11:59:31.912606  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.909944  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911576  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912711  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911591  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912764  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911599  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912808  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911647  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912853  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911654  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912905  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911664  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.912954  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911675  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913009  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911680  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913054  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911691  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913099  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911756  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913135  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911762  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913176  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911768  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913215  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.911780  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.913247  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.913330  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.913398  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.913438  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914165  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914227  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914270  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914307  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914346  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914382  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914421  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.914565  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.914614  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.914842  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.914873  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.914896  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.914955  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.915007  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.915287  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.915349  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.915408  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.915423  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.915432  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.915444  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.915474  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.915656  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.915876  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.915913  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.916956  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.917019  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.917068  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.917526  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.917570  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.918847  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.920830  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.920878  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.920936  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.922630  855239 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 11:59:31.921316  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.921688  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.922168  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.923259  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.924163  855239 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:59:31.924221  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.924953  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.925450  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.925482  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 11:59:31.925501  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.925138  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.925138  855239 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-943107"
	I0429 11:59:31.925382  855239 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 11:59:31.928486  855239 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 11:59:31.930304  855239 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 11:59:31.929667  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.933011  855239 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 11:59:31.926535  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.925402  855239 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 11:59:31.929897  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:31.929993  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.930243  855239 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 11:59:31.931659  855239 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 11:59:31.931699  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.934321  855239 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 11:59:31.934375  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 11:59:31.934404  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.935544  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 11:59:31.935630  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 11:59:31.935687  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 11:59:31.935728  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.935965  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.935993  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.936991  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.937003  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 11:59:31.937015  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.937821  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.938017  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.938224  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.938504  855239 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 11:59:31.938584  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I0429 11:59:31.938604  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.938623  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.939982  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.940049  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 11:59:31.942618  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.940161  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.940252  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.940275  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 11:59:31.940558  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.945815  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 11:59:31.944972  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.947083  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.947144  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 11:59:31.947400  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0429 11:59:31.947622  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.947751  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.948318  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.948520  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 11:59:31.948560  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0429 11:59:31.948752  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.948778  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.949058  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0429 11:59:31.949453  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.950113  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.949555  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.950154  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.950176  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.950177  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.950199  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.950082  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 11:59:31.950501  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.950688  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.950729  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.950746  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.950927  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.951107  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.951147  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0429 11:59:31.951528  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.952127  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.951836  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 11:59:31.953396  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 11:59:31.953416  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 11:59:31.952542  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.953436  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.952564  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.952567  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.952574  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.952603  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.952748  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.953628  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.952795  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.952982  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.953765  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.953805  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.953867  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.953911  855239 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:59:31.953910  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.953924  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:59:31.953942  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.953999  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.953091  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.954228  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.954310  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.954359  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.954417  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.954337  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.954775  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.954961  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.955136  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.956132  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.956626  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.956644  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.957828  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.958146  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.958214  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.958255  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.958305  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.958359  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.958893  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.958919  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.960718  855239 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:59:31.959099  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.959998  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.960153  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.960997  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.962089  855239 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 11:59:31.963429  855239 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 11:59:31.963451  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 11:59:31.963483  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.962211  855239 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0429 11:59:31.962265  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.962094  855239 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:59:31.962594  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.962599  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.964654  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44427
	I0429 11:59:31.964976  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:59:31.965072  855239 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 11:59:31.965093  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.965438  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.966223  855239 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 11:59:31.967509  855239 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:59:31.967521  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 11:59:31.967536  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.965470  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.966116  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0429 11:59:31.966349  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 11:59:31.967680  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.966373  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.966419  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.967095  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.968016  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.968470  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.968490  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.968569  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.968894  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.969218  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.969233  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.969335  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.969773  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:31.969803  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:31.969999  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.970245  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.972021  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.972304  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.972474  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.972489  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.972772  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.972829  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.972838  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.972997  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.973051  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.973225  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.973241  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.973417  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.973492  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.973514  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.973744  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.973980  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.974075  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.974302  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.974598  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.974786  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.975162  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.975704  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.975735  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.976196  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.976273  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.976496  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	W0429 11:59:31.976513  855239 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37162->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.977972  855239 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 11:59:31.976544  855239 retry.go:31] will retry after 205.986115ms: ssh: handshake failed: read tcp 192.168.39.1:37162->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.976687  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.979157  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 11:59:31.979188  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 11:59:31.979213  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:31.979262  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:31.982668  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:31.983079  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:31.983094  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	W0429 11:59:31.983188  855239 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37170->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.983207  855239 retry.go:31] will retry after 181.256893ms: ssh: handshake failed: read tcp 192.168.39.1:37170->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.983327  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:31.983575  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:31.983742  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:31.983917  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	W0429 11:59:31.984651  855239 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37176->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.984667  855239 retry.go:31] will retry after 202.699616ms: ssh: handshake failed: read tcp 192.168.39.1:37176->192.168.39.39:22: read: connection reset by peer
	I0429 11:59:31.992476  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I0429 11:59:31.993005  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:31.993548  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:31.993576  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:31.993933  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:31.994129  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:31.995974  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:31.997853  855239 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 11:59:31.999203  855239 out.go:177]   - Using image docker.io/busybox:stable
	I0429 11:59:32.000568  855239 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:59:32.000589  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 11:59:32.000613  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:32.004648  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:32.005081  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:32.005141  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:32.005500  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:32.005720  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:32.005895  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:32.006075  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:32.169398  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:59:32.256372  855239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:59:32.256550  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:59:32.270203  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 11:59:32.315147  855239 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 11:59:32.315188  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 11:59:32.326649  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:59:32.335887  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:59:32.453860  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:59:32.491927  855239 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 11:59:32.491957  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 11:59:32.563627  855239 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 11:59:32.563656  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 11:59:32.607790  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:59:32.636707  855239 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 11:59:32.636745  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 11:59:32.659826  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 11:59:32.659856  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 11:59:32.674452  855239 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 11:59:32.674480  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 11:59:32.772703  855239 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 11:59:32.772740  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 11:59:32.772883  855239 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:59:32.772903  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 11:59:32.789266  855239 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 11:59:32.789296  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 11:59:32.828587  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:59:32.831801  855239 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 11:59:32.831835  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 11:59:32.868936  855239 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 11:59:32.868976  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 11:59:32.904187  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 11:59:32.904222  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 11:59:32.907296  855239 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 11:59:32.907323  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 11:59:32.945764  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:59:32.963322  855239 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 11:59:32.963378  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 11:59:33.032898  855239 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 11:59:33.032941  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 11:59:33.045769  855239 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 11:59:33.045803  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 11:59:33.068745  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 11:59:33.070613  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 11:59:33.070643  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 11:59:33.117116  855239 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 11:59:33.117151  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 11:59:33.236085  855239 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:59:33.236115  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 11:59:33.344609  855239 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 11:59:33.344659  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 11:59:33.369041  855239 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 11:59:33.369072  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 11:59:33.487678  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 11:59:33.487709  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 11:59:33.492310  855239 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:59:33.492335  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 11:59:33.651937  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 11:59:33.651972  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 11:59:33.716125  855239 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 11:59:33.716156  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 11:59:33.722309  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:59:33.882660  855239 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:59:33.882701  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 11:59:33.883997  855239 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 11:59:33.884023  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 11:59:33.897713  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:59:34.134293  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 11:59:34.134335  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 11:59:34.145377  855239 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:59:34.145406  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 11:59:34.272673  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:59:34.281467  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:59:34.450300  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 11:59:34.450340  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 11:59:34.715423  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 11:59:34.715454  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 11:59:34.869775  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 11:59:34.869801  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 11:59:35.052238  855239 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:59:35.052279  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 11:59:35.437929  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:59:38.989441  855239 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 11:59:38.989508  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:38.993153  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:38.993610  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:38.993647  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:38.993829  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:38.994112  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:38.994474  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:38.994679  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:39.565831  855239 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 11:59:39.828733  855239 addons.go:234] Setting addon gcp-auth=true in "addons-943107"
	I0429 11:59:39.828819  855239 host.go:66] Checking if "addons-943107" exists ...
	I0429 11:59:39.829301  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:39.829358  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:39.846833  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0429 11:59:39.847445  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:39.848033  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:39.848069  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:39.848436  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:39.848929  855239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 11:59:39.848983  855239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:59:39.866227  855239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0429 11:59:39.866801  855239 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:59:39.867356  855239 main.go:141] libmachine: Using API Version  1
	I0429 11:59:39.867405  855239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:59:39.867872  855239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:59:39.868129  855239 main.go:141] libmachine: (addons-943107) Calling .GetState
	I0429 11:59:39.870115  855239 main.go:141] libmachine: (addons-943107) Calling .DriverName
	I0429 11:59:39.870396  855239 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 11:59:39.870424  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHHostname
	I0429 11:59:39.873623  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:39.874075  855239 main.go:141] libmachine: (addons-943107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:1a:36", ip: ""} in network mk-addons-943107: {Iface:virbr1 ExpiryTime:2024-04-29 12:58:48 +0000 UTC Type:0 Mac:52:54:00:57:1a:36 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:addons-943107 Clientid:01:52:54:00:57:1a:36}
	I0429 11:59:39.874113  855239 main.go:141] libmachine: (addons-943107) DBG | domain addons-943107 has defined IP address 192.168.39.39 and MAC address 52:54:00:57:1a:36 in network mk-addons-943107
	I0429 11:59:39.874257  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHPort
	I0429 11:59:39.874497  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHKeyPath
	I0429 11:59:39.874757  855239 main.go:141] libmachine: (addons-943107) Calling .GetSSHUsername
	I0429 11:59:39.874954  855239 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/addons-943107/id_rsa Username:docker}
	I0429 11:59:40.698577  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.529130403s)
	I0429 11:59:40.698634  855239 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.442220785s)
	I0429 11:59:40.698656  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.698666  855239 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.442080825s)
	I0429 11:59:40.698689  855239 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0429 11:59:40.698736  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.428504213s)
	I0429 11:59:40.698760  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.698777  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.698871  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.372160105s)
	I0429 11:59:40.698909  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.362988631s)
	I0429 11:59:40.698921  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.698935  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.698945  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.698979  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.245083439s)
	I0429 11:59:40.698994  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699003  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.698947  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699105  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.091285115s)
	I0429 11:59:40.699129  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699138  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.698671  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699241  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.870623142s)
	I0429 11:59:40.699262  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699271  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699308  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.753494366s)
	I0429 11:59:40.699339  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699345  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.63056887s)
	I0429 11:59:40.699354  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699382  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699393  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699501  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.977157312s)
	I0429 11:59:40.699525  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699536  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699612  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.801858864s)
	I0429 11:59:40.699628  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699636  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699737  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.427029606s)
	I0429 11:59:40.699754  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699767  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699780  855239 node_ready.go:35] waiting up to 6m0s for node "addons-943107" to be "Ready" ...
	I0429 11:59:40.699872  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.699892  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.699896  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.699905  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.699924  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699931  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699929  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.699941  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.699941  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.699948  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699950  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.699956  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.699959  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.699897  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700007  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700011  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700018  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.700028  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.700035  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700038  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.700042  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.700050  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.700056  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.700202  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700223  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700247  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700254  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.700262  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.700269  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.700267  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700303  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700310  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.700318  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.700321  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700325  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.700328  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.700338  855239 addons.go:470] Verifying addon ingress=true in "addons-943107"
	I0429 11:59:40.704098  855239 out.go:177] * Verifying ingress addon...
	I0429 11:59:40.700428  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700452  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.700580  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.700604  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.701596  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.701625  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.701643  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.701666  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.701685  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.701705  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.701720  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.701739  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.701760  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.701776  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.702448  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.702473  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.699967  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.705477  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.705490  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.705498  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.705531  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.705538  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.705546  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.705575  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.705658  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.705676  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.705685  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.705741  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.705750  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.705757  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.705786  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.707414  855239 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-943107 service yakd-dashboard -n yakd-dashboard
	
	I0429 11:59:40.706456  855239 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 11:59:40.706482  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.706631  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706635  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706662  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706663  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706667  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706682  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706684  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706684  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706700  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706707  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706728  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.706829  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.706928  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708534  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.708566  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.708711  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708726  855239 addons.go:470] Verifying addon metrics-server=true in "addons-943107"
	I0429 11:59:40.708750  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708781  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708794  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708856  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708897  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.708987  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.709006  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.709032  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.709141  855239 addons.go:470] Verifying addon registry=true in "addons-943107"
	I0429 11:59:40.710842  855239 out.go:177] * Verifying registry addon...
	I0429 11:59:40.709583  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.709600  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.712313  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.713383  855239 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 11:59:40.730966  855239 node_ready.go:49] node "addons-943107" has status "Ready":"True"
	I0429 11:59:40.730997  855239 node_ready.go:38] duration metric: took 31.166318ms for node "addons-943107" to be "Ready" ...
	I0429 11:59:40.731012  855239 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:59:40.742805  855239 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 11:59:40.742834  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:40.743004  855239 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:59:40.743036  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:40.798460  855239 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8j9h" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:40.801585  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.801615  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.801925  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.801932  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.801944  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	W0429 11:59:40.802061  855239 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0429 11:59:40.874856  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:40.874904  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:40.875394  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:40.875418  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:40.875419  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:40.915694  855239 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8j9h" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:40.915723  855239 pod_ready.go:81] duration metric: took 117.219871ms for pod "coredns-7db6d8ff4d-b8j9h" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:40.915734  855239 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hz22v" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:40.992366  855239 pod_ready.go:92] pod "coredns-7db6d8ff4d-hz22v" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:40.992395  855239 pod_ready.go:81] duration metric: took 76.653856ms for pod "coredns-7db6d8ff4d-hz22v" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:40.992407  855239 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.064393  855239 pod_ready.go:92] pod "etcd-addons-943107" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:41.064422  855239 pod_ready.go:81] duration metric: took 72.008777ms for pod "etcd-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.064433  855239 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.088072  855239 pod_ready.go:92] pod "kube-apiserver-addons-943107" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:41.088102  855239 pod_ready.go:81] duration metric: took 23.662489ms for pod "kube-apiserver-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.088114  855239 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.118501  855239 pod_ready.go:92] pod "kube-controller-manager-addons-943107" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:41.118528  855239 pod_ready.go:81] duration metric: took 30.40721ms for pod "kube-controller-manager-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.118543  855239 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzr8x" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.229544  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:41.238229  855239 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-943107" context rescaled to 1 replicas
	I0429 11:59:41.277028  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:41.615768  855239 pod_ready.go:92] pod "kube-proxy-bzr8x" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:41.615795  855239 pod_ready.go:81] duration metric: took 497.246004ms for pod "kube-proxy-bzr8x" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.615806  855239 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.793485  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:41.793716  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:41.908568  855239 pod_ready.go:92] pod "kube-scheduler-addons-943107" in "kube-system" namespace has status "Ready":"True"
	I0429 11:59:41.908597  855239 pod_ready.go:81] duration metric: took 292.784273ms for pod "kube-scheduler-addons-943107" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:41.908609  855239 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace to be "Ready" ...
	I0429 11:59:42.009876  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.728346463s)
	W0429 11:59:42.009966  855239 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:59:42.010005  855239 retry.go:31] will retry after 159.446812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:59:42.170161  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:59:42.250086  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:42.262335  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:42.732793  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:42.735094  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:43.233114  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:43.243576  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:43.296746  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.858736576s)
	I0429 11:59:43.296756  855239 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.426332458s)
	I0429 11:59:43.296831  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:43.296849  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:43.298159  855239 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 11:59:43.297242  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:43.298207  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:43.298220  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:43.297275  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:43.298229  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:43.299543  855239 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:59:43.300758  855239 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 11:59:43.300778  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 11:59:43.299968  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:43.300881  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:43.300898  855239 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-943107"
	I0429 11:59:43.300003  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:43.302226  855239 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 11:59:43.304851  855239 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 11:59:43.339215  855239 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:59:43.339244  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:43.550966  855239 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 11:59:43.550996  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 11:59:43.626100  855239 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:59:43.626140  855239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 11:59:43.711057  855239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:59:43.717687  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:43.728178  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:43.814936  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:43.920502  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:44.214161  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:44.219503  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:44.312708  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:44.713772  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:44.718108  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:44.812082  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:44.969274  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.799046271s)
	I0429 11:59:44.969346  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:44.969360  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:44.969721  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:44.969783  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:44.969801  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:44.969816  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:44.969828  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:44.970114  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:44.970136  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:45.218332  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:45.230580  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:45.314284  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:45.736797  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:45.739130  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:45.815126  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:45.999293  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:46.013436  855239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.30232563s)
	I0429 11:59:46.013504  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:46.013516  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:46.014020  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:46.014051  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:46.014061  855239 main.go:141] libmachine: Making call to close driver server
	I0429 11:59:46.014070  855239 main.go:141] libmachine: (addons-943107) Calling .Close
	I0429 11:59:46.014380  855239 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:59:46.014438  855239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:59:46.014393  855239 main.go:141] libmachine: (addons-943107) DBG | Closing plugin on server side
	I0429 11:59:46.016564  855239 addons.go:470] Verifying addon gcp-auth=true in "addons-943107"
	I0429 11:59:46.018727  855239 out.go:177] * Verifying gcp-auth addon...
	I0429 11:59:46.021394  855239 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 11:59:46.030318  855239 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 11:59:46.030355  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:46.213686  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:46.217866  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:46.312035  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:46.525876  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:46.714735  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:46.718546  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:46.811490  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:47.024996  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:47.215098  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:47.219227  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:47.310870  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:47.525179  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:47.714580  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:47.719068  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:47.811646  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:48.025906  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:48.225623  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:48.230120  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:48.312115  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:48.417438  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:48.526624  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:48.716917  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:48.724442  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:48.817068  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:49.025887  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:49.214719  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:49.218105  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:49.311556  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:49.525510  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:49.714459  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:49.717955  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:49.811376  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:50.025595  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:50.214439  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:50.217787  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:50.314284  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:50.420036  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:50.527154  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:50.715745  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:50.719350  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:50.819179  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:51.024855  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:51.214548  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:51.217840  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:51.323370  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:51.525935  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:51.713682  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:51.724994  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:51.812318  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:52.025700  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:52.214495  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:52.221994  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:52.311823  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:52.525359  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:52.713664  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:52.718053  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:52.814348  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:52.917163  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:53.026228  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:53.215110  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:53.222713  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:53.316412  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:53.525937  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:53.714576  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:53.719637  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:53.811106  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:54.025426  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:54.222987  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:54.223049  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:54.311552  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:54.526363  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:54.713495  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:54.719065  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:54.820102  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:55.026277  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:55.213943  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:55.218780  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:55.311201  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:55.415565  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:55.526366  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:55.735868  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:55.736166  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:55.811130  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:56.025921  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:56.214517  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:56.218017  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:56.311490  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:56.526562  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:56.713693  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:56.718015  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:56.811515  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:57.025859  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:57.214464  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:57.220730  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:57.311461  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:57.416119  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 11:59:57.525778  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:57.714047  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:57.718880  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:57.812143  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:58.026304  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:58.214776  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:58.219077  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:58.311480  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:58.526298  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:58.713701  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:58.720255  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:58.811685  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:59.025932  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:59.214415  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:59.223318  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:59.310528  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:59.525532  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:59:59.713888  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:59:59.719729  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:59:59.811892  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:59:59.915633  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:00.026347  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:00.213741  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:00.218786  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:00.311781  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:00.525998  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:00.713373  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:00.720967  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:00.810993  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:01.026369  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:01.227132  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:01.230112  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:01.312792  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:01.526663  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:01.717342  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:01.721992  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:01.810839  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:02.025239  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:02.213966  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:02.219925  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:02.592596  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:02.598576  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:02.602054  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:02.793827  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:02.796498  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:02.823881  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:03.026376  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:03.216584  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:03.220138  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:03.311354  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:03.525446  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:03.731123  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:03.746652  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:03.810274  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:04.026387  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:04.213543  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:04.217810  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:04.311380  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:04.525666  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:04.713838  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:04.722101  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:04.811465  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:04.914680  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:05.025666  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:05.213475  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:05.217420  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:05.310232  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:05.525901  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:05.714168  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:05.717609  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:05.811850  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:06.475799  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:06.476200  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:06.478137  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:06.478789  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:06.526669  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:06.713869  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:06.717932  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:06.811311  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:06.915528  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:07.025405  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:07.214472  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:07.219074  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:07.314038  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:07.525755  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:07.715542  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:07.719133  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:07.811815  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:08.025638  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:08.213736  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:08.218855  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:08.311768  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:08.525697  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:08.713929  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:08.719441  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:08.813211  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:08.918973  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:09.026038  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:09.213731  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:09.218242  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:09.311414  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:09.526203  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:09.721518  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:09.721606  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:09.816245  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:10.026667  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:10.214576  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:10.218005  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:10.311475  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:10.526546  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:10.714067  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:10.718988  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:10.810442  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:11.026272  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:11.213707  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:11.218536  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:11.311981  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:11.422792  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:11.525544  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:11.713887  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:11.719343  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:11.810899  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:12.026743  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:12.214175  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:12.218787  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:12.313863  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:12.525816  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:12.714217  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:12.718752  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:12.813024  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:13.027002  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:13.214226  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:13.218532  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:13.311523  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:13.525940  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:13.713420  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:13.718020  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:13.811311  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:13.915705  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:14.026283  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:14.214240  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:14.218817  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:14.311190  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:14.526102  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:14.715810  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:14.720111  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:14.811784  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:15.025410  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:15.213619  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:15.218340  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:15.310543  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:15.526184  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:15.713738  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:15.722627  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:15.815769  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:16.025118  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:16.213759  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:16.218013  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:16.315691  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:16.415577  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:16.528949  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:16.714383  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:16.720050  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:16.820682  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:17.025716  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:17.214516  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:17.222148  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:00:17.312892  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:17.525687  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:17.714178  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:17.718972  855239 kapi.go:107] duration metric: took 37.005589903s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 12:00:17.813146  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:18.026174  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:18.213863  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:18.312332  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:18.525810  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:18.714877  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:18.815680  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:18.921026  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:19.025679  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:19.213860  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:19.314560  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:19.526589  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:19.714456  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:19.812379  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:20.026628  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:20.213619  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:20.311379  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:20.525568  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:20.716932  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:20.818932  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:21.025871  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:21.213740  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:21.310587  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:21.415246  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:21.525565  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:21.714841  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:21.811448  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:22.025960  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:22.214182  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:22.315810  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:22.525702  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:22.713814  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:22.817619  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:23.025844  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:23.213976  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:23.314445  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:23.526689  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:23.713688  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:23.810564  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:23.916068  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:24.026017  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:24.214666  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:24.312768  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:24.525705  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:24.714337  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:24.811210  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:25.025726  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:25.214436  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:25.312782  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:25.525951  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:25.713106  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:25.812070  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:25.920058  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:26.025613  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:26.214180  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:26.311136  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:26.526363  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:26.713758  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:26.811465  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:27.025885  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:27.214715  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:27.311306  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:27.526915  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:27.714081  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:27.815998  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:28.026586  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:28.214059  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:28.312503  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:28.415668  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:28.525403  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:28.714107  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:28.811485  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:29.025780  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:29.213893  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:29.311558  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:29.525438  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:29.714194  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:29.812896  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:30.026180  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:30.213707  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:30.311348  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:30.419174  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:30.526037  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:30.714203  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:30.810965  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:31.027264  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:31.213238  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:31.310712  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:31.526076  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:31.713347  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:31.815282  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:32.025850  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:32.214278  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:32.311764  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:32.421832  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:32.525842  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:32.714630  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:33.264618  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:33.270442  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:33.270502  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:33.311773  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:33.528810  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:33.714617  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:33.810972  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:34.026577  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:34.213737  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:34.311719  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:34.526898  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:34.714205  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:34.811692  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:34.915965  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:35.025630  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:35.213894  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:35.318861  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:35.526352  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:35.713490  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:35.812946  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:36.026275  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:36.213805  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:36.310975  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:36.525401  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:37.097353  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:37.101415  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:37.101955  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:37.102357  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:37.214507  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:37.312418  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:37.525267  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:37.718444  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:37.812425  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:38.025961  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:38.215866  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:38.311121  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:38.525771  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:38.714577  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:38.811714  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:39.025952  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:39.213947  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:39.310614  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:39.418637  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:39.526072  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:39.713294  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:39.811328  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:40.025573  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:40.213617  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:40.311137  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:40.534622  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:40.716132  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:40.830681  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:41.026955  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:41.213584  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:41.315098  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:41.525935  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:41.713712  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:41.827732  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:42.391405  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:42.391961  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:42.398285  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:42.398630  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:42.527406  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:42.715165  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:42.811307  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:43.024779  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:43.214071  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:43.311504  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:43.525389  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:43.714140  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:43.811129  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:44.031330  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:44.216478  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:44.311945  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:44.419095  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:44.526032  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:44.714619  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:44.816389  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:45.027237  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:45.213933  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:45.312747  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:45.528272  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:45.714764  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:45.812132  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:46.025804  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:46.214428  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:46.315913  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:46.424832  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:46.533221  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:46.714245  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:46.815950  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:47.031029  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:47.213464  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:47.311066  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:47.526147  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:47.713232  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:47.813700  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:48.025968  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:48.215161  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:48.318215  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:48.526159  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:48.713929  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:48.811486  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:48.916476  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:49.026013  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:49.212955  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:49.318304  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:49.526482  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:49.717909  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:50.046620  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:50.047001  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:50.213729  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:50.310998  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:50.525342  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:50.714000  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:50.810985  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:51.025398  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:51.213767  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:51.310993  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:51.416830  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:51.525882  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:51.716186  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:51.814583  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:52.026084  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:52.213557  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:52.312592  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:52.525927  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:52.713942  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:52.814684  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:53.026226  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:53.213664  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:53.314741  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:53.424865  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:53.527769  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:53.713952  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:53.811720  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:54.025041  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:54.216250  855239 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:00:54.311718  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:54.526772  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:54.714432  855239 kapi.go:107] duration metric: took 1m14.007973301s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 12:00:54.814160  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:55.027613  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:55.312179  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:55.525455  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:55.810941  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:55.916384  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:56.026087  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:56.314221  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:56.525548  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:56.811906  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:57.025480  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:00:57.316597  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:57.553402  855239 kapi.go:107] duration metric: took 1m11.532006849s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 12:00:57.555177  855239 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-943107 cluster.
	I0429 12:00:57.556663  855239 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 12:00:57.558048  855239 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 12:00:57.818487  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:57.929452  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:00:58.313616  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:58.812494  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:00:59.312793  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:00.114425  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:00.126902  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:00.311518  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:00.811770  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:01.310740  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:01.817745  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:02.311451  855239 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:01:02.415516  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:02.811570  855239 kapi.go:107] duration metric: took 1m19.50671548s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 12:01:02.814792  855239 out.go:177] * Enabled addons: metrics-server, cloud-spanner, yakd, storage-provisioner, ingress-dns, nvidia-device-plugin, inspektor-gadget, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0429 12:01:02.816209  855239 addons.go:505] duration metric: took 1m31.054582473s for enable addons: enabled=[metrics-server cloud-spanner yakd storage-provisioner ingress-dns nvidia-device-plugin inspektor-gadget helm-tiller storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0429 12:01:04.416003  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:06.446775  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:08.916148  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:10.916305  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:12.916404  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:15.417559  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:17.915646  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:19.917358  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:22.416241  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:24.418487  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:26.916292  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:29.417363  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:31.918430  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:34.416191  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:36.916107  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:38.916310  855239 pod_ready.go:102] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"False"
	I0429 12:01:39.916492  855239 pod_ready.go:92] pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace has status "Ready":"True"
	I0429 12:01:39.916525  855239 pod_ready.go:81] duration metric: took 1m58.007910168s for pod "metrics-server-c59844bb4-9vzhv" in "kube-system" namespace to be "Ready" ...
	I0429 12:01:39.916537  855239 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vmp66" in "kube-system" namespace to be "Ready" ...
	I0429 12:01:39.923064  855239 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vmp66" in "kube-system" namespace has status "Ready":"True"
	I0429 12:01:39.923101  855239 pod_ready.go:81] duration metric: took 6.554172ms for pod "nvidia-device-plugin-daemonset-vmp66" in "kube-system" namespace to be "Ready" ...
	I0429 12:01:39.923127  855239 pod_ready.go:38] duration metric: took 1m59.192090909s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:01:39.923156  855239 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:01:39.923216  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 12:01:39.923308  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 12:01:39.975179  855239 cri.go:89] found id: "a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84"
	I0429 12:01:39.975255  855239 cri.go:89] found id: ""
	I0429 12:01:39.975265  855239 logs.go:276] 1 containers: [a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84]
	I0429 12:01:39.975338  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:39.981069  855239 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 12:01:39.981161  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 12:01:40.026925  855239 cri.go:89] found id: "be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374"
	I0429 12:01:40.026956  855239 cri.go:89] found id: ""
	I0429 12:01:40.026967  855239 logs.go:276] 1 containers: [be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374]
	I0429 12:01:40.027039  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:40.032700  855239 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 12:01:40.032805  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 12:01:40.085315  855239 cri.go:89] found id: "0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462"
	I0429 12:01:40.085351  855239 cri.go:89] found id: ""
	I0429 12:01:40.085363  855239 logs.go:276] 1 containers: [0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462]
	I0429 12:01:40.085434  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:40.092112  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 12:01:40.092213  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 12:01:40.136191  855239 cri.go:89] found id: "d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb"
	I0429 12:01:40.136223  855239 cri.go:89] found id: ""
	I0429 12:01:40.136233  855239 logs.go:276] 1 containers: [d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb]
	I0429 12:01:40.136396  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:40.141914  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 12:01:40.142010  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 12:01:40.185651  855239 cri.go:89] found id: "f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738"
	I0429 12:01:40.185679  855239 cri.go:89] found id: ""
	I0429 12:01:40.185691  855239 logs.go:276] 1 containers: [f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738]
	I0429 12:01:40.185764  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:40.191391  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 12:01:40.191498  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 12:01:40.233822  855239 cri.go:89] found id: "93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89"
	I0429 12:01:40.233855  855239 cri.go:89] found id: ""
	I0429 12:01:40.233865  855239 logs.go:276] 1 containers: [93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89]
	I0429 12:01:40.233946  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:40.238942  855239 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 12:01:40.239035  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 12:01:40.282880  855239 cri.go:89] found id: ""
	I0429 12:01:40.282914  855239 logs.go:276] 0 containers: []
	W0429 12:01:40.282923  855239 logs.go:278] No container was found matching "kindnet"
	I0429 12:01:40.282934  855239 logs.go:123] Gathering logs for describe nodes ...
	I0429 12:01:40.282952  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 12:01:40.449141  855239 logs.go:123] Gathering logs for kube-scheduler [d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb] ...
	I0429 12:01:40.449201  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb"
	I0429 12:01:40.503282  855239 logs.go:123] Gathering logs for kube-proxy [f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738] ...
	I0429 12:01:40.503345  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738"
	I0429 12:01:40.547751  855239 logs.go:123] Gathering logs for CRI-O ...
	I0429 12:01:40.547801  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 12:01:41.331373  855239 logs.go:123] Gathering logs for container status ...
	I0429 12:01:41.331437  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 12:01:41.383494  855239 logs.go:123] Gathering logs for kubelet ...
	I0429 12:01:41.383555  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 12:01:41.439625  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: W0429 11:59:32.154586    1277 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.439876  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: E0429 11:59:32.154653    1277 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.440040  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: W0429 11:59:32.154701    1277 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.440265  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: E0429 11:59:32.154710    1277 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.445438  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: W0429 11:59:38.706811    1277 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.445595  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706850    1277 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.445732  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: W0429 11:59:38.706881    1277 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.445885  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706891    1277 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.447555  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:39 addons-943107 kubelet[1277]: W0429 11:59:39.708670    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.447710  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:39 addons-943107 kubelet[1277]: E0429 11:59:39.708708    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.457685  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:45 addons-943107 kubelet[1277]: W0429 11:59:45.969556    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.457852  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:45 addons-943107 kubelet[1277]: E0429 11:59:45.969607    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	I0429 12:01:41.480710  855239 logs.go:123] Gathering logs for kube-apiserver [a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84] ...
	I0429 12:01:41.480757  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84"
	I0429 12:01:41.547001  855239 logs.go:123] Gathering logs for etcd [be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374] ...
	I0429 12:01:41.547055  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374"
	I0429 12:01:41.628587  855239 logs.go:123] Gathering logs for coredns [0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462] ...
	I0429 12:01:41.628638  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462"
	I0429 12:01:41.676073  855239 logs.go:123] Gathering logs for kube-controller-manager [93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89] ...
	I0429 12:01:41.676122  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89"
	I0429 12:01:41.751927  855239 logs.go:123] Gathering logs for dmesg ...
	I0429 12:01:41.751977  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 12:01:41.769353  855239 out.go:304] Setting ErrFile to fd 2...
	I0429 12:01:41.769387  855239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 12:01:41.769461  855239 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0429 12:01:41.769478  855239 out.go:239]   Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706891    1277 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	  Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706891    1277 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.769489  855239 out.go:239]   Apr 29 11:59:39 addons-943107 kubelet[1277]: W0429 11:59:39.708670    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	  Apr 29 11:59:39 addons-943107 kubelet[1277]: W0429 11:59:39.708670    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.769499  855239 out.go:239]   Apr 29 11:59:39 addons-943107 kubelet[1277]: E0429 11:59:39.708708    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	  Apr 29 11:59:39 addons-943107 kubelet[1277]: E0429 11:59:39.708708    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.769509  855239 out.go:239]   Apr 29 11:59:45 addons-943107 kubelet[1277]: W0429 11:59:45.969556    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	  Apr 29 11:59:45 addons-943107 kubelet[1277]: W0429 11:59:45.969556    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	W0429 12:01:41.769517  855239 out.go:239]   Apr 29 11:59:45 addons-943107 kubelet[1277]: E0429 11:59:45.969607    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	  Apr 29 11:59:45 addons-943107 kubelet[1277]: E0429 11:59:45.969607    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	I0429 12:01:41.769524  855239 out.go:304] Setting ErrFile to fd 2...
	I0429 12:01:41.769530  855239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:01:51.770644  855239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:01:51.798358  855239 api_server.go:72] duration metric: took 2m20.036747204s to wait for apiserver process to appear ...
	I0429 12:01:51.798404  855239 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:01:51.798460  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 12:01:51.798536  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 12:01:51.844328  855239 cri.go:89] found id: "a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84"
	I0429 12:01:51.844357  855239 cri.go:89] found id: ""
	I0429 12:01:51.844366  855239 logs.go:276] 1 containers: [a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84]
	I0429 12:01:51.844428  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:51.849774  855239 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 12:01:51.849848  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 12:01:51.895651  855239 cri.go:89] found id: "be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374"
	I0429 12:01:51.895677  855239 cri.go:89] found id: ""
	I0429 12:01:51.895685  855239 logs.go:276] 1 containers: [be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374]
	I0429 12:01:51.895752  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:51.900599  855239 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 12:01:51.900736  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 12:01:51.950969  855239 cri.go:89] found id: "0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462"
	I0429 12:01:51.950995  855239 cri.go:89] found id: ""
	I0429 12:01:51.951003  855239 logs.go:276] 1 containers: [0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462]
	I0429 12:01:51.951081  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:51.955820  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 12:01:51.955909  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 12:01:52.011824  855239 cri.go:89] found id: "d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb"
	I0429 12:01:52.011865  855239 cri.go:89] found id: ""
	I0429 12:01:52.011877  855239 logs.go:276] 1 containers: [d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb]
	I0429 12:01:52.011957  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:52.017980  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 12:01:52.018062  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 12:01:52.064922  855239 cri.go:89] found id: "f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738"
	I0429 12:01:52.064965  855239 cri.go:89] found id: ""
	I0429 12:01:52.064978  855239 logs.go:276] 1 containers: [f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738]
	I0429 12:01:52.065041  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:52.069763  855239 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 12:01:52.069864  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 12:01:52.119573  855239 cri.go:89] found id: "93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89"
	I0429 12:01:52.119601  855239 cri.go:89] found id: ""
	I0429 12:01:52.119609  855239 logs.go:276] 1 containers: [93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89]
	I0429 12:01:52.119680  855239 ssh_runner.go:195] Run: which crictl
	I0429 12:01:52.124150  855239 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 12:01:52.124265  855239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 12:01:52.173538  855239 cri.go:89] found id: ""
	I0429 12:01:52.173583  855239 logs.go:276] 0 containers: []
	W0429 12:01:52.173595  855239 logs.go:278] No container was found matching "kindnet"
	I0429 12:01:52.173613  855239 logs.go:123] Gathering logs for kube-proxy [f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738] ...
	I0429 12:01:52.173638  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f790655986b0d7207dded2789ac945e937a79ab5f5ed611615718775fdbd6738"
	I0429 12:01:52.214136  855239 logs.go:123] Gathering logs for kube-controller-manager [93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89] ...
	I0429 12:01:52.214188  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93dcac1887248edee29b9ab3f4aa8cd1311f573b5999f0b5c36fd8b758fc1d89"
	I0429 12:01:52.277472  855239 logs.go:123] Gathering logs for container status ...
	I0429 12:01:52.277531  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 12:01:52.335810  855239 logs.go:123] Gathering logs for dmesg ...
	I0429 12:01:52.335869  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 12:01:52.353059  855239 logs.go:123] Gathering logs for describe nodes ...
	I0429 12:01:52.353101  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 12:01:52.488525  855239 logs.go:123] Gathering logs for etcd [be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374] ...
	I0429 12:01:52.488574  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825776c1734a1d06a5e8eddefd31a60dda358e4e6f19536cf93c7d4de23374"
	I0429 12:01:52.551526  855239 logs.go:123] Gathering logs for coredns [0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462] ...
	I0429 12:01:52.551581  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed95318fd1c00686d0a44d5e98ec637d67a3bd41eddd81812cdba5389dbf462"
	I0429 12:01:52.601175  855239 logs.go:123] Gathering logs for kubelet ...
	I0429 12:01:52.601237  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 12:01:52.642404  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: W0429 11:59:32.154586    1277 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.642582  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: E0429 11:59:32.154653    1277 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.642717  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: W0429 11:59:32.154701    1277 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.642872  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:32 addons-943107 kubelet[1277]: E0429 11:59:32.154710    1277 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.648150  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: W0429 11:59:38.706811    1277 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.648321  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706850    1277 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.648454  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: W0429 11:59:38.706881    1277 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.648600  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:38 addons-943107 kubelet[1277]: E0429 11:59:38.706891    1277 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.650166  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:39 addons-943107 kubelet[1277]: W0429 11:59:39.708670    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.650312  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:39 addons-943107 kubelet[1277]: E0429 11:59:39.708708    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-943107" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.659990  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:45 addons-943107 kubelet[1277]: W0429 11:59:45.969556    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	W0429 12:01:52.660152  855239 logs.go:138] Found kubelet problem: Apr 29 11:59:45 addons-943107 kubelet[1277]: E0429 11:59:45.969607    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-943107" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-943107' and this object
	I0429 12:01:52.683521  855239 logs.go:123] Gathering logs for kube-apiserver [a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84] ...
	I0429 12:01:52.683577  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a33c146c7b3d4d87881ea442cc92ca36196613ef570c73306c2ef4f2d9ab84"
	I0429 12:01:52.734489  855239 logs.go:123] Gathering logs for kube-scheduler [d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb] ...
	I0429 12:01:52.734544  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e1769048948badc7e0ff1d2495946490e520615ed12ce230664429ff8900fb"
	I0429 12:01:52.786132  855239 logs.go:123] Gathering logs for CRI-O ...
	I0429 12:01:52.786188  855239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-linux-amd64 start -p addons-943107 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image rm gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image rm gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr: (2.325246578s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-341155" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.79s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 node stop m02 -v=7 --alsologtostderr
E0429 12:52:00.214898  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:52:41.175935  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.51710203s)

                                                
                                                
-- stdout --
	* Stopping node "ha-212075-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:51:43.771748  874259 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:51:43.772073  874259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:51:43.772087  874259 out.go:304] Setting ErrFile to fd 2...
	I0429 12:51:43.772094  874259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:51:43.772359  874259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:51:43.772709  874259 mustload.go:65] Loading cluster: ha-212075
	I0429 12:51:43.773264  874259 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:51:43.773296  874259 stop.go:39] StopHost: ha-212075-m02
	I0429 12:51:43.773826  874259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:51:43.773872  874259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:51:43.790743  874259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37941
	I0429 12:51:43.791328  874259 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:51:43.792070  874259 main.go:141] libmachine: Using API Version  1
	I0429 12:51:43.792098  874259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:51:43.792427  874259 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:51:43.795045  874259 out.go:177] * Stopping node "ha-212075-m02"  ...
	I0429 12:51:43.796735  874259 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 12:51:43.796784  874259 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:51:43.797214  874259 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 12:51:43.797262  874259 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:51:43.800786  874259 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:51:43.801273  874259 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:51:43.801330  874259 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:51:43.801499  874259 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:51:43.801763  874259 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:51:43.801940  874259 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:51:43.802098  874259 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:51:43.895701  874259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 12:51:43.950867  874259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 12:51:44.006898  874259 main.go:141] libmachine: Stopping "ha-212075-m02"...
	I0429 12:51:44.006934  874259 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:51:44.008680  874259 main.go:141] libmachine: (ha-212075-m02) Calling .Stop
	I0429 12:51:44.012847  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 0/120
	I0429 12:51:45.014556  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 1/120
	I0429 12:51:46.016273  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 2/120
	I0429 12:51:47.017631  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 3/120
	I0429 12:51:48.019182  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 4/120
	I0429 12:51:49.020566  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 5/120
	I0429 12:51:50.022453  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 6/120
	I0429 12:51:51.024794  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 7/120
	I0429 12:51:52.026110  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 8/120
	I0429 12:51:53.027861  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 9/120
	I0429 12:51:54.030446  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 10/120
	I0429 12:51:55.032492  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 11/120
	I0429 12:51:56.034114  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 12/120
	I0429 12:51:57.036630  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 13/120
	I0429 12:51:58.038318  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 14/120
	I0429 12:51:59.040845  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 15/120
	I0429 12:52:00.042445  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 16/120
	I0429 12:52:01.044046  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 17/120
	I0429 12:52:02.046199  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 18/120
	I0429 12:52:03.047598  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 19/120
	I0429 12:52:04.049658  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 20/120
	I0429 12:52:05.051468  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 21/120
	I0429 12:52:06.052958  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 22/120
	I0429 12:52:07.054421  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 23/120
	I0429 12:52:08.056309  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 24/120
	I0429 12:52:09.058899  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 25/120
	I0429 12:52:10.061126  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 26/120
	I0429 12:52:11.062765  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 27/120
	I0429 12:52:12.064700  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 28/120
	I0429 12:52:13.066363  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 29/120
	I0429 12:52:14.068312  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 30/120
	I0429 12:52:15.070133  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 31/120
	I0429 12:52:16.071626  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 32/120
	I0429 12:52:17.073919  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 33/120
	I0429 12:52:18.075811  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 34/120
	I0429 12:52:19.077831  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 35/120
	I0429 12:52:20.079244  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 36/120
	I0429 12:52:21.080611  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 37/120
	I0429 12:52:22.082093  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 38/120
	I0429 12:52:23.083405  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 39/120
	I0429 12:52:24.085750  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 40/120
	I0429 12:52:25.087245  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 41/120
	I0429 12:52:26.088524  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 42/120
	I0429 12:52:27.090072  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 43/120
	I0429 12:52:28.091677  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 44/120
	I0429 12:52:29.094044  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 45/120
	I0429 12:52:30.095646  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 46/120
	I0429 12:52:31.097297  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 47/120
	I0429 12:52:32.098715  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 48/120
	I0429 12:52:33.100294  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 49/120
	I0429 12:52:34.102541  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 50/120
	I0429 12:52:35.104223  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 51/120
	I0429 12:52:36.105721  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 52/120
	I0429 12:52:37.107375  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 53/120
	I0429 12:52:38.108794  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 54/120
	I0429 12:52:39.110929  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 55/120
	I0429 12:52:40.112500  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 56/120
	I0429 12:52:41.114439  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 57/120
	I0429 12:52:42.116178  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 58/120
	I0429 12:52:43.117467  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 59/120
	I0429 12:52:44.119213  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 60/120
	I0429 12:52:45.120729  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 61/120
	I0429 12:52:46.122908  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 62/120
	I0429 12:52:47.124390  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 63/120
	I0429 12:52:48.125844  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 64/120
	I0429 12:52:49.127949  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 65/120
	I0429 12:52:50.129479  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 66/120
	I0429 12:52:51.130959  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 67/120
	I0429 12:52:52.132623  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 68/120
	I0429 12:52:53.134623  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 69/120
	I0429 12:52:54.137030  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 70/120
	I0429 12:52:55.138261  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 71/120
	I0429 12:52:56.139862  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 72/120
	I0429 12:52:57.142301  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 73/120
	I0429 12:52:58.143753  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 74/120
	I0429 12:52:59.145567  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 75/120
	I0429 12:53:00.146990  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 76/120
	I0429 12:53:01.148479  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 77/120
	I0429 12:53:02.151025  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 78/120
	I0429 12:53:03.152850  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 79/120
	I0429 12:53:04.155071  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 80/120
	I0429 12:53:05.156601  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 81/120
	I0429 12:53:06.158369  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 82/120
	I0429 12:53:07.160074  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 83/120
	I0429 12:53:08.162084  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 84/120
	I0429 12:53:09.164036  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 85/120
	I0429 12:53:10.166559  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 86/120
	I0429 12:53:11.168342  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 87/120
	I0429 12:53:12.169998  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 88/120
	I0429 12:53:13.171464  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 89/120
	I0429 12:53:14.173491  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 90/120
	I0429 12:53:15.174968  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 91/120
	I0429 12:53:16.176705  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 92/120
	I0429 12:53:17.178565  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 93/120
	I0429 12:53:18.180110  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 94/120
	I0429 12:53:19.182006  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 95/120
	I0429 12:53:20.183787  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 96/120
	I0429 12:53:21.186076  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 97/120
	I0429 12:53:22.188260  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 98/120
	I0429 12:53:23.189719  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 99/120
	I0429 12:53:24.192321  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 100/120
	I0429 12:53:25.193860  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 101/120
	I0429 12:53:26.195341  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 102/120
	I0429 12:53:27.196759  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 103/120
	I0429 12:53:28.198402  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 104/120
	I0429 12:53:29.199811  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 105/120
	I0429 12:53:30.201944  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 106/120
	I0429 12:53:31.203660  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 107/120
	I0429 12:53:32.205911  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 108/120
	I0429 12:53:33.207406  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 109/120
	I0429 12:53:34.209702  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 110/120
	I0429 12:53:35.211206  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 111/120
	I0429 12:53:36.213121  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 112/120
	I0429 12:53:37.215147  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 113/120
	I0429 12:53:38.216790  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 114/120
	I0429 12:53:39.218790  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 115/120
	I0429 12:53:40.220335  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 116/120
	I0429 12:53:41.221818  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 117/120
	I0429 12:53:42.223909  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 118/120
	I0429 12:53:43.226184  874259 main.go:141] libmachine: (ha-212075-m02) Waiting for machine to stop 119/120
	I0429 12:53:44.227133  874259 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 12:53:44.227330  874259 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-212075 node stop m02 -v=7 --alsologtostderr": exit status 30
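The failure above follows directly from the stop loop visible in the stderr log: after calling .Stop on the domain, minikube polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and, when the machine is still "Running" after the last attempt, surfaces the "unable to stop vm" error (stop.go:66) that the test reports as exit status 30. A minimal Go sketch of that poll-with-timeout pattern is shown below; the helper names (vmState, stopVM) are illustrative stand-ins, not minikube's actual API.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // vmState is a stand-in for the driver's GetState call; in this sketch the
    // guest never reaches "Stopped", mirroring the hang seen in the log above.
    func vmState() string { return "Running" }

    // stopVM polls the machine state once per second, up to maxAttempts times,
    // producing the same "Waiting for machine to stop N/120" progression as the log.
    func stopVM(name string, maxAttempts int) error {
    	// The real driver issues the Stop request here before entering the wait loop.
    	for i := 0; i < maxAttempts; i++ {
    		if vmState() == "Stopped" {
    			return nil
    		}
    		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxAttempts)
    		time.Sleep(time.Second)
    	}
    	return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
    	// 120 attempts at one second each is the two-minute window seen in the log.
    	if err := stopVM("ha-212075-m02", 120); err != nil {
    		fmt.Println("stop err:", err)
    	}
    }

Because the log shows the driver calling .Stop before the wait loop begins, a guest that never reaches the stopped state simply exhausts all 120 attempts, which is the roughly two-minute hang recorded above.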
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
E0429 12:54:03.096495  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (19.314369201s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:53:44.293919  874686 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:53:44.294085  874686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:53:44.294095  874686 out.go:304] Setting ErrFile to fd 2...
	I0429 12:53:44.294102  874686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:53:44.294330  874686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:53:44.294625  874686 out.go:298] Setting JSON to false
	I0429 12:53:44.294676  874686 mustload.go:65] Loading cluster: ha-212075
	I0429 12:53:44.294788  874686 notify.go:220] Checking for updates...
	I0429 12:53:44.295219  874686 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:53:44.295245  874686 status.go:255] checking status of ha-212075 ...
	I0429 12:53:44.295830  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.295917  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.313734  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0429 12:53:44.314406  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.315144  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.315177  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.315848  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.316134  874686 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:53:44.318141  874686 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:53:44.318170  874686 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:53:44.318640  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.318707  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.336279  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0429 12:53:44.336751  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.337261  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.337285  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.337662  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.337956  874686 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:53:44.341460  874686 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:53:44.341955  874686 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:53:44.342010  874686 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:53:44.342202  874686 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:53:44.342547  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.342604  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.359428  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0429 12:53:44.359920  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.360566  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.360595  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.360976  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.361184  874686 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:53:44.361413  874686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:53:44.361462  874686 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:53:44.364614  874686 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:53:44.365083  874686 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:53:44.365115  874686 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:53:44.365320  874686 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:53:44.365560  874686 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:53:44.365733  874686 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:53:44.365876  874686 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:53:44.457453  874686 ssh_runner.go:195] Run: systemctl --version
	I0429 12:53:44.465590  874686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:53:44.486752  874686 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:53:44.486790  874686 api_server.go:166] Checking apiserver status ...
	I0429 12:53:44.486828  874686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:53:44.506280  874686 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:53:44.517369  874686 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:53:44.517443  874686 ssh_runner.go:195] Run: ls
	I0429 12:53:44.522619  874686 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:53:44.527414  874686 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:53:44.527451  874686 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:53:44.527463  874686 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:53:44.527485  874686 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:53:44.527934  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.527997  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.543955  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0429 12:53:44.544582  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.545113  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.545141  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.545536  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.545747  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:53:44.547449  874686 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:53:44.547471  874686 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:53:44.547791  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.547817  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.563436  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0429 12:53:44.563953  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.564461  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.564489  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.564848  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.565062  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:53:44.568092  874686 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:53:44.568538  874686 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:53:44.568576  874686 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:53:44.568781  874686 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:53:44.569111  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:53:44.569156  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:53:44.585024  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0429 12:53:44.585545  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:53:44.586065  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:53:44.586090  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:53:44.586439  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:53:44.586620  874686 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:53:44.586830  874686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:53:44.586853  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:53:44.589978  874686 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:53:44.590432  874686 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:53:44.590460  874686 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:53:44.590661  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:53:44.590881  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:53:44.591082  874686 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:53:44.591234  874686 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:03.151638  874686 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:03.151758  874686 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:03.151773  874686 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:03.151784  874686 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:03.151811  874686 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:03.151818  874686 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:03.152250  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.152304  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.169267  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0429 12:54:03.169857  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.170540  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.170575  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.171034  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.171321  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:03.173274  874686 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:03.173301  874686 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:03.173756  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.173820  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.190705  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0429 12:54:03.191267  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.191927  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.191960  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.192423  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.192692  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:03.196102  874686 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:03.196581  874686 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:03.196612  874686 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:03.196844  874686 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:03.197397  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.197458  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.213920  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0429 12:54:03.214458  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.214993  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.215018  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.215398  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.215613  874686 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:03.215838  874686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:03.215862  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:03.218444  874686 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:03.218860  874686 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:03.218884  874686 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:03.219083  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:03.219286  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:03.219449  874686 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:03.219614  874686 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:03.309972  874686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:03.333716  874686 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:03.333753  874686 api_server.go:166] Checking apiserver status ...
	I0429 12:54:03.333788  874686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:03.350378  874686 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:03.361043  874686 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:03.361111  874686 ssh_runner.go:195] Run: ls
	I0429 12:54:03.366524  874686 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:03.371528  874686 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:03.371568  874686 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:03.371582  874686 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:03.371611  874686 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:03.372029  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.372076  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.388569  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0429 12:54:03.389061  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.389568  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.389595  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.390131  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.390368  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:03.392081  874686 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:03.392104  874686 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:03.392395  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.392428  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.408758  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I0429 12:54:03.409299  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.409902  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.409935  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.410293  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.410468  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:03.413719  874686 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:03.414250  874686 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:03.414275  874686 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:03.414500  874686 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:03.414811  874686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:03.414855  874686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:03.431986  874686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0429 12:54:03.432546  874686 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:03.433130  874686 main.go:141] libmachine: Using API Version  1
	I0429 12:54:03.433159  874686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:03.433549  874686 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:03.433761  874686 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:03.433956  874686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:03.433974  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:03.436938  874686 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:03.437438  874686 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:03.437474  874686 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:03.437585  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:03.437801  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:03.437968  874686 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:03.438144  874686 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:03.521905  874686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:03.540860  874686 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr" : exit status 3
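For context on why m02 is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent, the status log above probes each node in sequence: an SSH-based disk check on /var (df -h /var), a kubelet check via systemctl is-active, and, for control-plane nodes, a GET against the apiserver /healthz endpoint. When the SSH dial fails with "no route to host", the remaining checks are skipped and the node is marked as erroring. The Go sketch below reproduces that decision flow under stated assumptions; nodeStatus, sshRun and checkNode are hypothetical names for illustration, not minikube's code.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // nodeStatus mirrors the fields printed per node in the status output above.
    type nodeStatus struct {
    	Name, Host, Kubelet, APIServer string
    }

    // sshRun stands in for running a command over SSH; here it always fails the
    // way the unreachable m02 node did in the log ("no route to host").
    func sshRun(ip, cmd string) error {
    	return fmt.Errorf("dial tcp %s:22: connect: no route to host", ip)
    }

    func checkNode(name, ip, healthzURL string) nodeStatus {
    	st := nodeStatus{Name: name}
    	// 1. Storage check over SSH; failure marks the host as Error and skips the rest.
    	if err := sshRun(ip, `df -h /var | awk 'NR==2{print $5}'`); err != nil {
    		st.Host, st.Kubelet, st.APIServer = "Error", "Nonexistent", "Nonexistent"
    		return st
    	}
    	st.Host = "Running"
    	// 2. A kubelet check (systemctl is-active kubelet) would set st.Kubelet here.
    	st.Kubelet = "Running"
    	// 3. Control-plane nodes also probe the apiserver health endpoint.
    	client := &http.Client{Timeout: 2 * time.Second}
    	if resp, err := client.Get(healthzURL); err == nil && resp.StatusCode == 200 {
    		st.APIServer = "Running"
    		resp.Body.Close()
    	} else {
    		st.APIServer = "Stopped"
    	}
    	return st
    }

    func main() {
    	fmt.Printf("%+v\n", checkNode("ha-212075-m02", "192.168.39.36", "https://192.168.39.254:8443/healthz"))
    }

Running the sketch prints a struct equivalent to the m02 entry in the status output above, since the simulated SSH dial always fails before the kubelet and apiserver checks are reached.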
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-212075 -n ha-212075
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 logs -n 25: (1.624334641s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m03_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m04 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp testdata/cp-test.txt                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m04_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03:/home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m03 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-212075 node stop m02 -v=7                                                     | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:47:10
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:47:10.677919  870218 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:47:10.678233  870218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:47:10.678243  870218 out.go:304] Setting ErrFile to fd 2...
	I0429 12:47:10.678248  870218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:47:10.678446  870218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:47:10.679112  870218 out.go:298] Setting JSON to false
	I0429 12:47:10.680123  870218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77376,"bootTime":1714317455,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:47:10.680195  870218 start.go:139] virtualization: kvm guest
	I0429 12:47:10.682364  870218 out.go:177] * [ha-212075] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:47:10.683575  870218 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:47:10.683620  870218 notify.go:220] Checking for updates...
	I0429 12:47:10.684719  870218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:47:10.686075  870218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:47:10.687233  870218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:10.688452  870218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:47:10.689537  870218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:47:10.690735  870218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:47:10.726918  870218 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 12:47:10.728083  870218 start.go:297] selected driver: kvm2
	I0429 12:47:10.728096  870218 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:47:10.728109  870218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:47:10.728816  870218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:47:10.728911  870218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:47:10.744767  870218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:47:10.744835  870218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:47:10.745104  870218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:47:10.745163  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:10.745175  870218 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:47:10.745180  870218 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:47:10.745248  870218 start.go:340] cluster config:
	{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:47:10.745350  870218 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:47:10.747127  870218 out.go:177] * Starting "ha-212075" primary control-plane node in "ha-212075" cluster
	I0429 12:47:10.748332  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:47:10.748369  870218 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 12:47:10.748377  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:47:10.748457  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:47:10.748467  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:47:10.748770  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:47:10.748791  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json: {Name:mkcbad01c1c0b2ec15b4df8b0dfb07d2b34331f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:10.749013  870218 start.go:360] acquireMachinesLock for ha-212075: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:47:10.749049  870218 start.go:364] duration metric: took 18.822µs to acquireMachinesLock for "ha-212075"
	I0429 12:47:10.749068  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:47:10.749132  870218 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 12:47:10.750711  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:47:10.750854  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:47:10.750892  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:47:10.766284  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0429 12:47:10.766814  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:47:10.767483  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:47:10.767507  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:47:10.767857  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:47:10.768171  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:10.768384  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:10.768534  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:47:10.768583  870218 client.go:168] LocalClient.Create starting
	I0429 12:47:10.768617  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:47:10.768656  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:47:10.768671  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:47:10.768720  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:47:10.768743  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:47:10.768756  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:47:10.768775  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:47:10.768787  870218 main.go:141] libmachine: (ha-212075) Calling .PreCreateCheck
	I0429 12:47:10.769168  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:10.769571  870218 main.go:141] libmachine: Creating machine...
	I0429 12:47:10.769586  870218 main.go:141] libmachine: (ha-212075) Calling .Create
	I0429 12:47:10.769732  870218 main.go:141] libmachine: (ha-212075) Creating KVM machine...
	I0429 12:47:10.771123  870218 main.go:141] libmachine: (ha-212075) DBG | found existing default KVM network
	I0429 12:47:10.771904  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:10.771751  870241 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0429 12:47:10.771972  870218 main.go:141] libmachine: (ha-212075) DBG | created network xml: 
	I0429 12:47:10.771992  870218 main.go:141] libmachine: (ha-212075) DBG | <network>
	I0429 12:47:10.772001  870218 main.go:141] libmachine: (ha-212075) DBG |   <name>mk-ha-212075</name>
	I0429 12:47:10.772006  870218 main.go:141] libmachine: (ha-212075) DBG |   <dns enable='no'/>
	I0429 12:47:10.772013  870218 main.go:141] libmachine: (ha-212075) DBG |   
	I0429 12:47:10.772019  870218 main.go:141] libmachine: (ha-212075) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 12:47:10.772027  870218 main.go:141] libmachine: (ha-212075) DBG |     <dhcp>
	I0429 12:47:10.772033  870218 main.go:141] libmachine: (ha-212075) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 12:47:10.772045  870218 main.go:141] libmachine: (ha-212075) DBG |     </dhcp>
	I0429 12:47:10.772053  870218 main.go:141] libmachine: (ha-212075) DBG |   </ip>
	I0429 12:47:10.772063  870218 main.go:141] libmachine: (ha-212075) DBG |   
	I0429 12:47:10.772068  870218 main.go:141] libmachine: (ha-212075) DBG | </network>
	I0429 12:47:10.772075  870218 main.go:141] libmachine: (ha-212075) DBG | 
	I0429 12:47:10.777807  870218 main.go:141] libmachine: (ha-212075) DBG | trying to create private KVM network mk-ha-212075 192.168.39.0/24...
	I0429 12:47:10.853260  870218 main.go:141] libmachine: (ha-212075) DBG | private KVM network mk-ha-212075 192.168.39.0/24 created
	I0429 12:47:10.853342  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:10.853194  870241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:10.853364  870218 main.go:141] libmachine: (ha-212075) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 ...
	I0429 12:47:10.853387  870218 main.go:141] libmachine: (ha-212075) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:47:10.853475  870218 main.go:141] libmachine: (ha-212075) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:47:11.125251  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.125115  870241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa...
	I0429 12:47:11.350613  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.350414  870241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/ha-212075.rawdisk...
	I0429 12:47:11.350656  870218 main.go:141] libmachine: (ha-212075) DBG | Writing magic tar header
	I0429 12:47:11.350671  870218 main.go:141] libmachine: (ha-212075) DBG | Writing SSH key tar header
	I0429 12:47:11.350683  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.350536  870241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 ...
	I0429 12:47:11.350697  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075
	I0429 12:47:11.350709  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:47:11.350722  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 (perms=drwx------)
	I0429 12:47:11.350739  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:47:11.350747  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:11.350754  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:47:11.350764  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:47:11.350775  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:47:11.350787  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:47:11.350799  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:47:11.350806  870218 main.go:141] libmachine: (ha-212075) Creating domain...
	I0429 12:47:11.350818  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:47:11.350832  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:47:11.350922  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home
	I0429 12:47:11.350949  870218 main.go:141] libmachine: (ha-212075) DBG | Skipping /home - not owner
	I0429 12:47:11.352182  870218 main.go:141] libmachine: (ha-212075) define libvirt domain using xml: 
	I0429 12:47:11.352232  870218 main.go:141] libmachine: (ha-212075) <domain type='kvm'>
	I0429 12:47:11.352268  870218 main.go:141] libmachine: (ha-212075)   <name>ha-212075</name>
	I0429 12:47:11.352294  870218 main.go:141] libmachine: (ha-212075)   <memory unit='MiB'>2200</memory>
	I0429 12:47:11.352305  870218 main.go:141] libmachine: (ha-212075)   <vcpu>2</vcpu>
	I0429 12:47:11.352316  870218 main.go:141] libmachine: (ha-212075)   <features>
	I0429 12:47:11.352326  870218 main.go:141] libmachine: (ha-212075)     <acpi/>
	I0429 12:47:11.352337  870218 main.go:141] libmachine: (ha-212075)     <apic/>
	I0429 12:47:11.352346  870218 main.go:141] libmachine: (ha-212075)     <pae/>
	I0429 12:47:11.352372  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352384  870218 main.go:141] libmachine: (ha-212075)   </features>
	I0429 12:47:11.352396  870218 main.go:141] libmachine: (ha-212075)   <cpu mode='host-passthrough'>
	I0429 12:47:11.352405  870218 main.go:141] libmachine: (ha-212075)   
	I0429 12:47:11.352415  870218 main.go:141] libmachine: (ha-212075)   </cpu>
	I0429 12:47:11.352424  870218 main.go:141] libmachine: (ha-212075)   <os>
	I0429 12:47:11.352435  870218 main.go:141] libmachine: (ha-212075)     <type>hvm</type>
	I0429 12:47:11.352446  870218 main.go:141] libmachine: (ha-212075)     <boot dev='cdrom'/>
	I0429 12:47:11.352456  870218 main.go:141] libmachine: (ha-212075)     <boot dev='hd'/>
	I0429 12:47:11.352472  870218 main.go:141] libmachine: (ha-212075)     <bootmenu enable='no'/>
	I0429 12:47:11.352482  870218 main.go:141] libmachine: (ha-212075)   </os>
	I0429 12:47:11.352492  870218 main.go:141] libmachine: (ha-212075)   <devices>
	I0429 12:47:11.352506  870218 main.go:141] libmachine: (ha-212075)     <disk type='file' device='cdrom'>
	I0429 12:47:11.352528  870218 main.go:141] libmachine: (ha-212075)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/boot2docker.iso'/>
	I0429 12:47:11.352546  870218 main.go:141] libmachine: (ha-212075)       <target dev='hdc' bus='scsi'/>
	I0429 12:47:11.352558  870218 main.go:141] libmachine: (ha-212075)       <readonly/>
	I0429 12:47:11.352571  870218 main.go:141] libmachine: (ha-212075)     </disk>
	I0429 12:47:11.352580  870218 main.go:141] libmachine: (ha-212075)     <disk type='file' device='disk'>
	I0429 12:47:11.352595  870218 main.go:141] libmachine: (ha-212075)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:47:11.352614  870218 main.go:141] libmachine: (ha-212075)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/ha-212075.rawdisk'/>
	I0429 12:47:11.352628  870218 main.go:141] libmachine: (ha-212075)       <target dev='hda' bus='virtio'/>
	I0429 12:47:11.352643  870218 main.go:141] libmachine: (ha-212075)     </disk>
	I0429 12:47:11.352660  870218 main.go:141] libmachine: (ha-212075)     <interface type='network'>
	I0429 12:47:11.352671  870218 main.go:141] libmachine: (ha-212075)       <source network='mk-ha-212075'/>
	I0429 12:47:11.352679  870218 main.go:141] libmachine: (ha-212075)       <model type='virtio'/>
	I0429 12:47:11.352691  870218 main.go:141] libmachine: (ha-212075)     </interface>
	I0429 12:47:11.352704  870218 main.go:141] libmachine: (ha-212075)     <interface type='network'>
	I0429 12:47:11.352714  870218 main.go:141] libmachine: (ha-212075)       <source network='default'/>
	I0429 12:47:11.352726  870218 main.go:141] libmachine: (ha-212075)       <model type='virtio'/>
	I0429 12:47:11.352736  870218 main.go:141] libmachine: (ha-212075)     </interface>
	I0429 12:47:11.352747  870218 main.go:141] libmachine: (ha-212075)     <serial type='pty'>
	I0429 12:47:11.352760  870218 main.go:141] libmachine: (ha-212075)       <target port='0'/>
	I0429 12:47:11.352770  870218 main.go:141] libmachine: (ha-212075)     </serial>
	I0429 12:47:11.352783  870218 main.go:141] libmachine: (ha-212075)     <console type='pty'>
	I0429 12:47:11.352796  870218 main.go:141] libmachine: (ha-212075)       <target type='serial' port='0'/>
	I0429 12:47:11.352807  870218 main.go:141] libmachine: (ha-212075)     </console>
	I0429 12:47:11.352818  870218 main.go:141] libmachine: (ha-212075)     <rng model='virtio'>
	I0429 12:47:11.352833  870218 main.go:141] libmachine: (ha-212075)       <backend model='random'>/dev/random</backend>
	I0429 12:47:11.352844  870218 main.go:141] libmachine: (ha-212075)     </rng>
	I0429 12:47:11.352851  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352858  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352867  870218 main.go:141] libmachine: (ha-212075)   </devices>
	I0429 12:47:11.352879  870218 main.go:141] libmachine: (ha-212075) </domain>
	I0429 12:47:11.352888  870218 main.go:141] libmachine: (ha-212075) 
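	The XML dumps above (the mk-ha-212075 network and the ha-212075 domain) are what the kvm2 driver hands to libvirt before booting the VM. Purely as an illustration, and not minikube's actual code, the following Go sketch shows how such XML could be defined and started through the libvirt.org/go/libvirt bindings; the import path, the defineAndStart helper and the file names in main are assumptions of the sketch.

    // Illustrative sketch only: define a private libvirt network and a domain
    // from XML, then start both, roughly mirroring the driver's
    // "created network" and "define libvirt domain using xml" steps above.
    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed bindings, not minikube code
    )

    func defineAndStart(networkXML, domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // Persistently define the private network, then bring it up.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            return err
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            return err
        }

        // Persistently define the domain; Create() boots it (like "virsh start").
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create()
    }

    func main() {
        // Hypothetical file names holding XML like the dumps in the log above.
        netXML, err := os.ReadFile("mk-ha-212075-net.xml")
        if err != nil {
            log.Fatal(err)
        }
        domXML, err := os.ReadFile("ha-212075-domain.xml")
        if err != nil {
            log.Fatal(err)
        }
        if err := defineAndStart(string(netXML), string(domXML)); err != nil {
            log.Fatal(err)
        }
    }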
	I0429 12:47:11.358129  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:b9:e2:15 in network default
	I0429 12:47:11.358761  870218 main.go:141] libmachine: (ha-212075) Ensuring networks are active...
	I0429 12:47:11.358785  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:11.359625  870218 main.go:141] libmachine: (ha-212075) Ensuring network default is active
	I0429 12:47:11.359939  870218 main.go:141] libmachine: (ha-212075) Ensuring network mk-ha-212075 is active
	I0429 12:47:11.360450  870218 main.go:141] libmachine: (ha-212075) Getting domain xml...
	I0429 12:47:11.361219  870218 main.go:141] libmachine: (ha-212075) Creating domain...
	I0429 12:47:12.584589  870218 main.go:141] libmachine: (ha-212075) Waiting to get IP...
	I0429 12:47:12.585394  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:12.585797  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:12.585868  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:12.585792  870241 retry.go:31] will retry after 305.881234ms: waiting for machine to come up
	I0429 12:47:12.893551  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:12.894049  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:12.894079  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:12.894024  870241 retry.go:31] will retry after 344.55293ms: waiting for machine to come up
	I0429 12:47:13.241013  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:13.241469  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:13.241496  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:13.241434  870241 retry.go:31] will retry after 343.048472ms: waiting for machine to come up
	I0429 12:47:13.586141  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:13.586605  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:13.586654  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:13.586558  870241 retry.go:31] will retry after 450.225843ms: waiting for machine to come up
	I0429 12:47:14.038240  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:14.038757  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:14.038783  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:14.038699  870241 retry.go:31] will retry after 523.602131ms: waiting for machine to come up
	I0429 12:47:14.563556  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:14.564014  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:14.564045  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:14.563929  870241 retry.go:31] will retry after 805.259699ms: waiting for machine to come up
	I0429 12:47:15.371056  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:15.371475  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:15.371526  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:15.371456  870241 retry.go:31] will retry after 966.64669ms: waiting for machine to come up
	I0429 12:47:16.339433  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:16.339834  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:16.339867  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:16.339785  870241 retry.go:31] will retry after 1.23057243s: waiting for machine to come up
	I0429 12:47:17.572420  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:17.572903  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:17.572937  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:17.572841  870241 retry.go:31] will retry after 1.383346304s: waiting for machine to come up
	I0429 12:47:18.958480  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:18.958907  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:18.958936  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:18.958868  870241 retry.go:31] will retry after 1.674064931s: waiting for machine to come up
	I0429 12:47:20.634352  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:20.634768  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:20.634806  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:20.634700  870241 retry.go:31] will retry after 2.486061293s: waiting for machine to come up
	I0429 12:47:23.122390  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:23.122875  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:23.122898  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:23.122835  870241 retry.go:31] will retry after 2.897978896s: waiting for machine to come up
	I0429 12:47:26.022310  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:26.022740  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:26.022767  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:26.022711  870241 retry.go:31] will retry after 2.882393702s: waiting for machine to come up
	I0429 12:47:28.908794  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:28.909215  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:28.909242  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:28.909174  870241 retry.go:31] will retry after 5.119530721s: waiting for machine to come up
	I0429 12:47:34.030038  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.030480  870218 main.go:141] libmachine: (ha-212075) Found IP for machine: 192.168.39.97
	I0429 12:47:34.030515  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has current primary IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.030535  870218 main.go:141] libmachine: (ha-212075) Reserving static IP address...
	I0429 12:47:34.030917  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find host DHCP lease matching {name: "ha-212075", mac: "52:54:00:c0:56:df", ip: "192.168.39.97"} in network mk-ha-212075
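	The "will retry after ..." lines above are the driver polling libvirt for a DHCP lease with steadily growing, jittered delays until the domain reports an address. As a minimal stand-in only (this is not minikube's retry package; waitForIP and its backoff constants are made up for the sketch), a self-contained Go version of that loop could look like this:

    // Minimal sketch of a "wait for machine IP" loop with growing, jittered
    // backoff, in the spirit of the retry intervals logged above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            // Grow the delay roughly 1.5x per round and add some jitter.
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
            backoff = backoff * 3 / 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        start := time.Now()
        // Fake DHCP lookup: the lease "appears" after about two seconds.
        ip, err := waitForIP(func() (string, bool) {
            return "192.168.39.97", time.Since(start) > 2*time.Second
        }, 30*time.Second)
        fmt.Println(ip, err)
    }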
	I0429 12:47:34.119852  870218 main.go:141] libmachine: (ha-212075) DBG | Getting to WaitForSSH function...
	I0429 12:47:34.119883  870218 main.go:141] libmachine: (ha-212075) Reserved static IP address: 192.168.39.97
	I0429 12:47:34.119897  870218 main.go:141] libmachine: (ha-212075) Waiting for SSH to be available...
	I0429 12:47:34.122597  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.123056  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.123087  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.123278  870218 main.go:141] libmachine: (ha-212075) DBG | Using SSH client type: external
	I0429 12:47:34.123304  870218 main.go:141] libmachine: (ha-212075) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa (-rw-------)
	I0429 12:47:34.123350  870218 main.go:141] libmachine: (ha-212075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:47:34.123423  870218 main.go:141] libmachine: (ha-212075) DBG | About to run SSH command:
	I0429 12:47:34.123437  870218 main.go:141] libmachine: (ha-212075) DBG | exit 0
	I0429 12:47:34.255439  870218 main.go:141] libmachine: (ha-212075) DBG | SSH cmd err, output: <nil>: 
	I0429 12:47:34.255688  870218 main.go:141] libmachine: (ha-212075) KVM machine creation complete!
	I0429 12:47:34.256038  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:34.256561  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:34.256768  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:34.256961  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:47:34.256974  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:47:34.258344  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:47:34.258360  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:47:34.258367  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:47:34.258376  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.260892  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.261301  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.261335  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.261457  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.261727  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.261875  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.261974  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.262143  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.262365  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.262379  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:47:34.375070  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:47:34.375104  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:47:34.375116  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.377839  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.378225  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.378270  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.378421  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.378603  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.378801  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.378939  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.379246  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.379463  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.379477  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:47:34.496702  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:47:34.496834  870218 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:47:34.496850  870218 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:47:34.496862  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.497129  870218 buildroot.go:166] provisioning hostname "ha-212075"
	I0429 12:47:34.497157  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.497396  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.500100  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.500522  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.500549  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.500743  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.500984  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.501170  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.501329  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.501491  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.501694  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.501709  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075 && echo "ha-212075" | sudo tee /etc/hostname
	I0429 12:47:34.639970  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:47:34.640000  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.643277  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.643700  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.643736  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.643969  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.644183  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.644395  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.644531  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.644725  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.644909  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.644929  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:47:34.769813  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:47:34.769862  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:47:34.769887  870218 buildroot.go:174] setting up certificates
	I0429 12:47:34.769902  870218 provision.go:84] configureAuth start
	I0429 12:47:34.769920  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.770254  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:34.773213  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.773664  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.773697  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.773877  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.776462  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.776823  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.776853  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.777023  870218 provision.go:143] copyHostCerts
	I0429 12:47:34.777061  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:47:34.777107  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:47:34.777120  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:47:34.777220  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:47:34.777336  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:47:34.777363  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:47:34.777371  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:47:34.777417  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:47:34.777495  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:47:34.777518  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:47:34.777525  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:47:34.777561  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:47:34.777648  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075 san=[127.0.0.1 192.168.39.97 ha-212075 localhost minikube]
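	The server certificate generated above carries SANs for the loopback address, the node IP, the hostname and the generic minikube names. The sketch below shows how such a SAN list can be built with Go's crypto/x509; it self-signs to stay short, whereas the real step signs with the minikube CA key, and every value in it is illustrative only.

    // Sketch: mint a server certificate whose SANs match the log line above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-212075"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching san=[127.0.0.1 192.168.39.97 ha-212075 localhost minikube].
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.97")},
            DNSNames:    []string{"ha-212075", "localhost", "minikube"},
        }
        // Self-signed here; the real provisioner signs with the CA cert/key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }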
	I0429 12:47:34.986246  870218 provision.go:177] copyRemoteCerts
	I0429 12:47:34.986315  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:47:34.986343  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.989211  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.989554  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.989587  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.989780  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.990033  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.990225  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.990326  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.077898  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:47:35.078002  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:47:35.103811  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:47:35.103903  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:47:35.130521  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:47:35.130625  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:47:35.158296  870218 provision.go:87] duration metric: took 388.331009ms to configureAuth
	I0429 12:47:35.158348  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:47:35.158647  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:47:35.158755  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.162096  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.162516  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.162550  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.162789  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.163036  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.163228  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.163376  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.163547  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:35.163779  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:35.163806  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:47:35.454761  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:47:35.454801  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:47:35.454812  870218 main.go:141] libmachine: (ha-212075) Calling .GetURL
	I0429 12:47:35.456291  870218 main.go:141] libmachine: (ha-212075) DBG | Using libvirt version 6000000
	I0429 12:47:35.459567  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.459976  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.460009  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.460156  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:47:35.460174  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:47:35.460182  870218 client.go:171] duration metric: took 24.691589554s to LocalClient.Create
	I0429 12:47:35.460213  870218 start.go:167] duration metric: took 24.691680665s to libmachine.API.Create "ha-212075"
	I0429 12:47:35.460226  870218 start.go:293] postStartSetup for "ha-212075" (driver="kvm2")
	I0429 12:47:35.460240  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:47:35.460264  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.460530  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:47:35.460565  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.462997  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.463401  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.463421  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.463619  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.463842  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.463989  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.464114  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.554760  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:47:35.559334  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:47:35.559381  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:47:35.559459  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:47:35.559534  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:47:35.559544  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:47:35.559645  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:47:35.569671  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:47:35.596267  870218 start.go:296] duration metric: took 136.022682ms for postStartSetup
	I0429 12:47:35.596345  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:35.596981  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:35.599978  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.600353  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.600383  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.600634  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:47:35.600866  870218 start.go:128] duration metric: took 24.851721937s to createHost
	I0429 12:47:35.600897  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.603199  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.603644  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.603674  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.603745  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.603970  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.604159  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.604339  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.604533  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:35.604712  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:35.604729  870218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:47:35.720740  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394855.695919494
	
	I0429 12:47:35.720765  870218 fix.go:216] guest clock: 1714394855.695919494
	I0429 12:47:35.720776  870218 fix.go:229] Guest: 2024-04-29 12:47:35.695919494 +0000 UTC Remote: 2024-04-29 12:47:35.600880557 +0000 UTC m=+24.976033512 (delta=95.038937ms)
	I0429 12:47:35.720806  870218 fix.go:200] guest clock delta is within tolerance: 95.038937ms
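	The fix.go lines above read the guest's clock over SSH, compute the delta against the host's timestamp and skip a resync when the delta is inside a tolerance (about 95ms here). A toy Go version of that comparison follows; the one-second tolerance is an assumption of the sketch, not minikube's actual threshold.

    // Sketch: compare a guest-reported Unix timestamp with the host clock.
    package main

    import (
        "fmt"
        "time"
    )

    func clockDelta(guestUnix float64, host time.Time) time.Duration {
        guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
        return guest.Sub(host)
    }

    func main() {
        // Values taken from the log lines above (approximate due to float precision).
        host := time.Date(2024, 4, 29, 12, 47, 35, 600880557, time.UTC)
        delta := clockDelta(1714394855.695919494, host)
        const tolerance = time.Second // assumed tolerance for this sketch
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v within tolerance\n", delta)
        }
    }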
	I0429 12:47:35.720812  870218 start.go:83] releasing machines lock for "ha-212075", held for 24.971753151s
	I0429 12:47:35.720837  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.721124  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:35.723665  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.724032  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.724067  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.724221  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.724774  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.724980  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.725072  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:47:35.725118  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.725227  870218 ssh_runner.go:195] Run: cat /version.json
	I0429 12:47:35.725260  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.728155  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728311  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728537  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.728566  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728669  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.728701  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728731  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.728877  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.728952  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.729132  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.729136  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.729304  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.729322  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.729443  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.837269  870218 ssh_runner.go:195] Run: systemctl --version
	I0429 12:47:35.844092  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:47:36.005807  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:47:36.012800  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:47:36.012902  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:47:36.030274  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:47:36.030311  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:47:36.030402  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:47:36.046974  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:36.061895  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:47:36.061982  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:47:36.076800  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:47:36.091454  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:47:36.211024  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:47:36.351716  870218 docker.go:233] disabling docker service ...
	I0429 12:47:36.351802  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:47:36.367728  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:47:36.381746  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:47:36.521448  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:47:36.643441  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:47:36.658397  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:36.678361  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:47:36.678431  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.690326  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:47:36.690412  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.702156  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.714496  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.726598  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:47:36.739087  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.750643  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.769859  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.781534  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:47:36.791887  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:47:36.791968  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:47:36.806869  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:47:36.817796  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:36.937337  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
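The sed edits above pin the CRI-O pause image and cgroup driver and inject the unprivileged-port sysctl before the restart. A minimal sketch of how one might spot-check the result on the guest, assuming the same /etc/crio/crio.conf.d/02-crio.conf path and a shell on the VM:

  # hedged spot-check: values the edits above are expected to leave in place
  sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
  sudo systemctl is-active crio   # expect "active" after the restart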
	I0429 12:47:37.079704  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:47:37.079785  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:47:37.084919  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:47:37.085054  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:47:37.089276  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:47:37.132022  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:47:37.132124  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:47:37.162365  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:47:37.194434  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:47:37.195855  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:37.198800  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:37.199235  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:37.199265  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:37.199505  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:47:37.203973  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:47:37.217992  870218 kubeadm.go:877] updating cluster {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:47:37.218119  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:47:37.218170  870218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:47:37.254058  870218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 12:47:37.254136  870218 ssh_runner.go:195] Run: which lz4
	I0429 12:47:37.258626  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:47:37.258735  870218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:47:37.263492  870218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:47:37.263531  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 12:47:38.827807  870218 crio.go:462] duration metric: took 1.569087769s to copy over tarball
	I0429 12:47:38.827894  870218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:47:41.114738  870218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286814073s)
	I0429 12:47:41.114772  870218 crio.go:469] duration metric: took 2.286930667s to extract the tarball
	I0429 12:47:41.114780  870218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:47:41.153797  870218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:47:41.199426  870218 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:47:41.199455  870218 cache_images.go:84] Images are preloaded, skipping loading
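The preload check above is just a crictl image listing plus a lookup for the kube-apiserver tag; when the tag is missing, the tarball is streamed over SSH and unpacked into /var, exactly as logged. A rough manual equivalent, assuming crictl on the guest and the already-copied /preloaded.tar.lz4:

  sudo crictl images | grep registry.k8s.io/kube-apiserver || \
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4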
	I0429 12:47:41.199464  870218 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.30.0 crio true true} ...
	I0429 12:47:41.199578  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
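The [Unit]/[Service] fragment above becomes a kubelet systemd drop-in (the 308-byte 10-kubeadm.conf scp'd a few lines below). A hedged way to confirm which flags the kubelet actually picks up, assuming systemd on the guest:

  systemctl cat kubelet          # unit plus drop-ins, including 10-kubeadm.conf
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf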
	I0429 12:47:41.199653  870218 ssh_runner.go:195] Run: crio config
	I0429 12:47:41.250499  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:41.250526  870218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:47:41.250537  870218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:47:41.250559  870218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-212075 NodeName:ha-212075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:47:41.250705  870218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-212075"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
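The rendered InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration above is what later lands in /var/tmp/minikube/kubeadm.yaml. A hedged pre-flight of that file before init, assuming a recent kubeadm that ships the validate subcommand and the same binary path used later in this log:

  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml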
	I0429 12:47:41.250732  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:47:41.250777  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:47:41.269494  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:47:41.269627  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
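kube-vip runs as a static pod from this manifest; with cp_enable and lb_enable set it should claim the API VIP 192.168.39.254 on eth0 of the elected leader. A hedged check once the control plane is up:

  ip -4 addr show eth0 | grep 192.168.39.254       # VIP bound on the current leader
  kubectl -n kube-system get pods | grep kube-vip  # static pod created from /etc/kubernetes/manifests/kube-vip.yaml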
	I0429 12:47:41.269685  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:41.280555  870218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:47:41.280644  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:47:41.291333  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:47:41.310105  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:47:41.328730  870218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 12:47:41.347634  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0429 12:47:41.365931  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:47:41.370302  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:47:41.383866  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:41.512387  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:41.530551  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.97
	I0429 12:47:41.530581  870218 certs.go:194] generating shared ca certs ...
	I0429 12:47:41.530604  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.530779  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:47:41.530833  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:47:41.530848  870218 certs.go:256] generating profile certs ...
	I0429 12:47:41.530915  870218 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:47:41.530951  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt with IP's: []
	I0429 12:47:41.722700  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt ...
	I0429 12:47:41.722741  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt: {Name:mk4f0aba10f064735148f15f887ea67a1137a3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.722964  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key ...
	I0429 12:47:41.722982  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key: {Name:mkaee3859a995806ed485f81a0abcc895804c08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.723093  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294
	I0429 12:47:41.723113  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.254]
	I0429 12:47:42.017824  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 ...
	I0429 12:47:42.017866  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294: {Name:mk192112794ed2eccfcd600bb5d5c95e549cded1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.018075  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294 ...
	I0429 12:47:42.018095  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294: {Name:mka4cfea4c0ea9fde780611847d7c0973ea6230b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.018201  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:47:42.018327  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
	I0429 12:47:42.018421  870218 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:47:42.018445  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt with IP's: []
	I0429 12:47:42.170382  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt ...
	I0429 12:47:42.170425  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt: {Name:mk2c7c2efcf55e17dae029e8d8b23a5d23f2d657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.170634  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key ...
	I0429 12:47:42.170652  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key: {Name:mk8ce0121b4c42805c1956fc1acf6c7e5ee80e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.170754  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:47:42.170777  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:47:42.170794  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:47:42.170816  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:47:42.170839  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:47:42.170869  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:47:42.170896  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:47:42.170915  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:47:42.170984  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:47:42.171032  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:47:42.171059  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:47:42.171091  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:47:42.171131  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:47:42.171221  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:47:42.171302  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:47:42.171349  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.171385  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.171405  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.172051  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:47:42.204925  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:47:42.233610  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:47:42.265193  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:47:42.292433  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:47:42.326458  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:47:42.356962  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:47:42.386083  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:47:42.418703  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:47:42.450096  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:47:42.477246  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:47:42.506751  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
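After the scp batch above, the apiserver certificate on the node should carry the SANs requested at generation time (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.97 and the VIP 192.168.39.254). A hedged inspection with openssl:

  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'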
	I0429 12:47:42.528244  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:47:42.534899  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:47:42.548693  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.554530  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.554605  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.561916  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:47:42.575762  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:47:42.589167  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.594723  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.594814  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.602038  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:47:42.615369  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:47:42.628913  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.634531  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.634595  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.641566  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
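The 51391683.0, 3ec20f2e.0 and b5213941.0 symlinks created above are the OpenSSL subject-hash names that make these CAs discoverable under /etc/ssl/certs; the hash comes from the same openssl invocation the log runs. A hedged example tying the two together:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem          # prints b5213941
  sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt # should resolve via b5213941.0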
	I0429 12:47:42.655039  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:47:42.659883  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:42.659966  870218 kubeadm.go:391] StartCluster: {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:47:42.660127  870218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 12:47:42.660199  870218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:47:42.701855  870218 cri.go:89] found id: ""
	I0429 12:47:42.701934  870218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:47:42.712771  870218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:47:42.723461  870218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:47:42.735142  870218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:47:42.735166  870218 kubeadm.go:156] found existing configuration files:
	
	I0429 12:47:42.735214  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:47:42.745568  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:47:42.745651  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:47:42.756257  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:47:42.767063  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:47:42.767142  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:47:42.778879  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:47:42.789320  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:47:42.789402  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:47:42.800240  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:47:42.810530  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:47:42.810605  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:47:42.821530  870218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:47:42.934280  870218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:47:42.934357  870218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:47:43.080585  870218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:47:43.080749  870218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:47:43.080883  870218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:47:43.332866  870218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:47:43.342245  870218 out.go:204]   - Generating certificates and keys ...
	I0429 12:47:43.342415  870218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:47:43.342525  870218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:47:43.667747  870218 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:47:43.718862  870218 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:47:43.877370  870218 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:47:43.955431  870218 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:47:44.025648  870218 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:47:44.025818  870218 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-212075 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0429 12:47:44.114660  870218 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:47:44.114839  870218 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-212075 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0429 12:47:44.454769  870218 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:47:44.530738  870218 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:47:44.602193  870218 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:47:44.602286  870218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:47:44.691879  870218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:47:44.867537  870218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:47:44.989037  870218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:47:45.099892  870218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:47:45.247531  870218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:47:45.248041  870218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:47:45.251054  870218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:47:45.254247  870218 out.go:204]   - Booting up control plane ...
	I0429 12:47:45.254379  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:47:45.254461  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:47:45.254540  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:47:45.269826  870218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:47:45.270735  870218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:47:45.270791  870218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:47:45.415518  870218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:47:45.415641  870218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:47:46.416954  870218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002166461s
	I0429 12:47:46.417061  870218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:47:52.416938  870218 kubeadm.go:309] [api-check] The API server is healthy after 6.003166313s
	I0429 12:47:52.433395  870218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:47:52.450424  870218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:47:52.481163  870218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:47:52.481367  870218 kubeadm.go:309] [mark-control-plane] Marking the node ha-212075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:47:52.494539  870218 kubeadm.go:309] [bootstrap-token] Using token: oy0k5e.zul1f1ey7gnfr2ai
	I0429 12:47:52.496157  870218 out.go:204]   - Configuring RBAC rules ...
	I0429 12:47:52.496331  870218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:47:52.502259  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:47:52.511107  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:47:52.515407  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:47:52.518870  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:47:52.527162  870218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:47:52.824005  870218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:47:53.332771  870218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:47:53.824147  870218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:47:53.825228  870218 kubeadm.go:309] 
	I0429 12:47:53.825297  870218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:47:53.825327  870218 kubeadm.go:309] 
	I0429 12:47:53.825444  870218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:47:53.825457  870218 kubeadm.go:309] 
	I0429 12:47:53.825521  870218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:47:53.825634  870218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:47:53.825715  870218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:47:53.825748  870218 kubeadm.go:309] 
	I0429 12:47:53.825830  870218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:47:53.825842  870218 kubeadm.go:309] 
	I0429 12:47:53.825919  870218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:47:53.825928  870218 kubeadm.go:309] 
	I0429 12:47:53.826008  870218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:47:53.826118  870218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:47:53.826239  870218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:47:53.826252  870218 kubeadm.go:309] 
	I0429 12:47:53.826371  870218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:47:53.826475  870218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:47:53.826494  870218 kubeadm.go:309] 
	I0429 12:47:53.826613  870218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oy0k5e.zul1f1ey7gnfr2ai \
	I0429 12:47:53.826769  870218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 12:47:53.826807  870218 kubeadm.go:309] 	--control-plane 
	I0429 12:47:53.826817  870218 kubeadm.go:309] 
	I0429 12:47:53.826927  870218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:47:53.826938  870218 kubeadm.go:309] 
	I0429 12:47:53.827054  870218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oy0k5e.zul1f1ey7gnfr2ai \
	I0429 12:47:53.827204  870218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 12:47:53.827540  870218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:47:53.827637  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:53.827661  870218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:47:53.829635  870218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:47:53.831034  870218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:47:53.837177  870218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:47:53.837201  870218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:47:53.857011  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
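The applied cni.yaml is the kindnet manifest recommended above for the multinode profile; kindnet typically drops its own conflist into /etc/cni/net.d alongside the .mk_disabled bridge/podman files. A hedged look at the outcome:

  ls /etc/cni/net.d
  kubectl -n kube-system get pods -o wide | grep -i kindnet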
	I0429 12:47:54.316710  870218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:47:54.316795  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:54.316822  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075 minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=true
	I0429 12:47:54.345797  870218 ops.go:34] apiserver oom_adj: -16
	I0429 12:47:54.496882  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:54.997852  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:55.497076  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:55.997178  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:56.497134  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:56.997042  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:57.497933  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:57.997275  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:58.496967  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:58.997579  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:59.497467  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:59.996968  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:00.497065  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:00.996897  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:01.497143  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:01.997850  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:02.497748  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:02.997561  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:03.497638  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:03.997996  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:04.497993  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:04.997647  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:05.497664  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:05.996988  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:06.497690  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:06.997830  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:07.106754  870218 kubeadm.go:1107] duration metric: took 12.790030758s to wait for elevateKubeSystemPrivileges
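The burst of identical `kubectl get sa default` calls between 12:47:54 and 12:48:07 is the wait for the default service account to appear after the minikube-rbac clusterrolebinding. A minimal equivalent of that poll, assuming the same binary and kubeconfig paths:

  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done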
	W0429 12:48:07.106816  870218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:48:07.106828  870218 kubeadm.go:393] duration metric: took 24.446869371s to StartCluster
	I0429 12:48:07.106853  870218 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:07.106931  870218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:48:07.107757  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:07.108054  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:48:07.108076  870218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:48:07.108170  870218 addons.go:69] Setting storage-provisioner=true in profile "ha-212075"
	I0429 12:48:07.108059  870218 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:07.108227  870218 addons.go:234] Setting addon storage-provisioner=true in "ha-212075"
	I0429 12:48:07.108236  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:48:07.108272  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:07.108274  870218 addons.go:69] Setting default-storageclass=true in profile "ha-212075"
	I0429 12:48:07.108298  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:07.108318  870218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-212075"
	I0429 12:48:07.108698  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.108711  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.108741  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.108818  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.125835  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0429 12:48:07.125855  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0429 12:48:07.126384  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.126392  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.126889  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.126907  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.127043  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.127070  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.127240  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.127469  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.127687  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.127850  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.127884  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.130130  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:48:07.130484  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:48:07.131110  870218 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:48:07.131444  870218 addons.go:234] Setting addon default-storageclass=true in "ha-212075"
	I0429 12:48:07.131495  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:07.131891  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.131942  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.145217  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0429 12:48:07.145713  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.146291  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.146318  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.146674  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.146921  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.147889  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0429 12:48:07.148424  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.148984  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.149009  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.149027  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:07.151195  870218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:48:07.149400  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.151835  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.152779  870218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:48:07.152793  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:48:07.152798  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.152812  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:07.156024  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.156453  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:07.156481  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.156627  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:07.156875  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:07.157054  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:07.157212  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:07.169582  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0429 12:48:07.170085  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.170658  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.170689  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.171047  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.171293  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.172975  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:07.173320  870218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:48:07.173343  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:48:07.173366  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:07.176220  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.176648  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:07.176680  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.176863  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:07.177079  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:07.177269  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:07.177438  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:07.266153  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:48:07.412582  870218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:48:07.452705  870218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:48:08.095864  870218 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
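The ConfigMap rewrite above injects a hosts block for host.minikube.internal ahead of CoreDNS's forward plugin. Reconstructed from the sed expressions in that command (not captured from the cluster), the relevant Corefile fragment would look roughly like:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The second -e expression also inserts a log directive above the errors plugin.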
	I0429 12:48:08.460264  870218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.047632478s)
	I0429 12:48:08.460335  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460334  870218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.007583898s)
	I0429 12:48:08.460382  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460397  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460349  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460782  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.460798  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.460807  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460814  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460874  870218 main.go:141] libmachine: (ha-212075) DBG | Closing plugin on server side
	I0429 12:48:08.460931  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.460951  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.460968  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460979  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.461022  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.461036  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.461284  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.461303  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.461463  870218 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:48:08.461474  870218 round_trippers.go:469] Request Headers:
	I0429 12:48:08.461491  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.461497  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:48:08.472509  870218 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 12:48:08.473158  870218 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:48:08.473177  870218 round_trippers.go:469] Request Headers:
	I0429 12:48:08.473184  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.473189  870218 round_trippers.go:473]     Content-Type: application/json
	I0429 12:48:08.473191  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:48:08.476542  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:08.476725  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.476745  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.477033  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.477052  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.477060  870218 main.go:141] libmachine: (ha-212075) DBG | Closing plugin on server side
	I0429 12:48:08.479794  870218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:48:08.481166  870218 addons.go:505] duration metric: took 1.373080963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:48:08.481216  870218 start.go:245] waiting for cluster config update ...
	I0429 12:48:08.481256  870218 start.go:254] writing updated cluster config ...
	I0429 12:48:08.482945  870218 out.go:177] 
	I0429 12:48:08.484429  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:08.484522  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:08.486369  870218 out.go:177] * Starting "ha-212075-m02" control-plane node in "ha-212075" cluster
	I0429 12:48:08.487634  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:48:08.487679  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:48:08.487783  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:48:08.487798  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:48:08.487893  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:08.488101  870218 start.go:360] acquireMachinesLock for ha-212075-m02: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:48:08.488155  870218 start.go:364] duration metric: took 29.185µs to acquireMachinesLock for "ha-212075-m02"
	I0429 12:48:08.488180  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:08.488293  870218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0429 12:48:08.489965  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:48:08.490069  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:08.490097  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:08.506743  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0429 12:48:08.507216  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:08.507765  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:08.507789  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:08.508188  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:08.508403  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:08.508606  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:08.508820  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:48:08.508849  870218 client.go:168] LocalClient.Create starting
	I0429 12:48:08.508889  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:48:08.508932  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:48:08.508969  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:48:08.509048  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:48:08.509075  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:48:08.509094  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:48:08.509130  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:48:08.509143  870218 main.go:141] libmachine: (ha-212075-m02) Calling .PreCreateCheck
	I0429 12:48:08.509327  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:08.509720  870218 main.go:141] libmachine: Creating machine...
	I0429 12:48:08.509736  870218 main.go:141] libmachine: (ha-212075-m02) Calling .Create
	I0429 12:48:08.509878  870218 main.go:141] libmachine: (ha-212075-m02) Creating KVM machine...
	I0429 12:48:08.511329  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found existing default KVM network
	I0429 12:48:08.511504  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found existing private KVM network mk-ha-212075
	I0429 12:48:08.511675  870218 main.go:141] libmachine: (ha-212075-m02) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 ...
	I0429 12:48:08.511701  870218 main.go:141] libmachine: (ha-212075-m02) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:48:08.511754  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.511637  870647 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:48:08.511870  870218 main.go:141] libmachine: (ha-212075-m02) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:48:08.772309  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.772142  870647 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa...
	I0429 12:48:08.898179  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.898033  870647 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/ha-212075-m02.rawdisk...
	I0429 12:48:08.898251  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Writing magic tar header
	I0429 12:48:08.898270  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Writing SSH key tar header
	I0429 12:48:08.898287  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.898155  870647 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 ...
	I0429 12:48:08.898330  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02
	I0429 12:48:08.898347  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:48:08.898361  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 (perms=drwx------)
	I0429 12:48:08.898387  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:48:08.898401  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:48:08.898435  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:48:08.898449  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:48:08.898463  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:48:08.898477  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:48:08.898494  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:48:08.898504  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home
	I0429 12:48:08.898517  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Skipping /home - not owner
	I0429 12:48:08.898531  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:48:08.898551  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:48:08.898567  870218 main.go:141] libmachine: (ha-212075-m02) Creating domain...
	I0429 12:48:08.899629  870218 main.go:141] libmachine: (ha-212075-m02) define libvirt domain using xml: 
	I0429 12:48:08.899656  870218 main.go:141] libmachine: (ha-212075-m02) <domain type='kvm'>
	I0429 12:48:08.899663  870218 main.go:141] libmachine: (ha-212075-m02)   <name>ha-212075-m02</name>
	I0429 12:48:08.899669  870218 main.go:141] libmachine: (ha-212075-m02)   <memory unit='MiB'>2200</memory>
	I0429 12:48:08.899677  870218 main.go:141] libmachine: (ha-212075-m02)   <vcpu>2</vcpu>
	I0429 12:48:08.899684  870218 main.go:141] libmachine: (ha-212075-m02)   <features>
	I0429 12:48:08.899692  870218 main.go:141] libmachine: (ha-212075-m02)     <acpi/>
	I0429 12:48:08.899699  870218 main.go:141] libmachine: (ha-212075-m02)     <apic/>
	I0429 12:48:08.899709  870218 main.go:141] libmachine: (ha-212075-m02)     <pae/>
	I0429 12:48:08.899714  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.899733  870218 main.go:141] libmachine: (ha-212075-m02)   </features>
	I0429 12:48:08.899744  870218 main.go:141] libmachine: (ha-212075-m02)   <cpu mode='host-passthrough'>
	I0429 12:48:08.899749  870218 main.go:141] libmachine: (ha-212075-m02)   
	I0429 12:48:08.899763  870218 main.go:141] libmachine: (ha-212075-m02)   </cpu>
	I0429 12:48:08.899774  870218 main.go:141] libmachine: (ha-212075-m02)   <os>
	I0429 12:48:08.899785  870218 main.go:141] libmachine: (ha-212075-m02)     <type>hvm</type>
	I0429 12:48:08.899794  870218 main.go:141] libmachine: (ha-212075-m02)     <boot dev='cdrom'/>
	I0429 12:48:08.899805  870218 main.go:141] libmachine: (ha-212075-m02)     <boot dev='hd'/>
	I0429 12:48:08.899842  870218 main.go:141] libmachine: (ha-212075-m02)     <bootmenu enable='no'/>
	I0429 12:48:08.899866  870218 main.go:141] libmachine: (ha-212075-m02)   </os>
	I0429 12:48:08.899892  870218 main.go:141] libmachine: (ha-212075-m02)   <devices>
	I0429 12:48:08.899912  870218 main.go:141] libmachine: (ha-212075-m02)     <disk type='file' device='cdrom'>
	I0429 12:48:08.899930  870218 main.go:141] libmachine: (ha-212075-m02)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/boot2docker.iso'/>
	I0429 12:48:08.899941  870218 main.go:141] libmachine: (ha-212075-m02)       <target dev='hdc' bus='scsi'/>
	I0429 12:48:08.899950  870218 main.go:141] libmachine: (ha-212075-m02)       <readonly/>
	I0429 12:48:08.899958  870218 main.go:141] libmachine: (ha-212075-m02)     </disk>
	I0429 12:48:08.899964  870218 main.go:141] libmachine: (ha-212075-m02)     <disk type='file' device='disk'>
	I0429 12:48:08.899973  870218 main.go:141] libmachine: (ha-212075-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:48:08.899981  870218 main.go:141] libmachine: (ha-212075-m02)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/ha-212075-m02.rawdisk'/>
	I0429 12:48:08.899992  870218 main.go:141] libmachine: (ha-212075-m02)       <target dev='hda' bus='virtio'/>
	I0429 12:48:08.900009  870218 main.go:141] libmachine: (ha-212075-m02)     </disk>
	I0429 12:48:08.900025  870218 main.go:141] libmachine: (ha-212075-m02)     <interface type='network'>
	I0429 12:48:08.900040  870218 main.go:141] libmachine: (ha-212075-m02)       <source network='mk-ha-212075'/>
	I0429 12:48:08.900051  870218 main.go:141] libmachine: (ha-212075-m02)       <model type='virtio'/>
	I0429 12:48:08.900062  870218 main.go:141] libmachine: (ha-212075-m02)     </interface>
	I0429 12:48:08.900072  870218 main.go:141] libmachine: (ha-212075-m02)     <interface type='network'>
	I0429 12:48:08.900081  870218 main.go:141] libmachine: (ha-212075-m02)       <source network='default'/>
	I0429 12:48:08.900092  870218 main.go:141] libmachine: (ha-212075-m02)       <model type='virtio'/>
	I0429 12:48:08.900107  870218 main.go:141] libmachine: (ha-212075-m02)     </interface>
	I0429 12:48:08.900126  870218 main.go:141] libmachine: (ha-212075-m02)     <serial type='pty'>
	I0429 12:48:08.900139  870218 main.go:141] libmachine: (ha-212075-m02)       <target port='0'/>
	I0429 12:48:08.900149  870218 main.go:141] libmachine: (ha-212075-m02)     </serial>
	I0429 12:48:08.900173  870218 main.go:141] libmachine: (ha-212075-m02)     <console type='pty'>
	I0429 12:48:08.900184  870218 main.go:141] libmachine: (ha-212075-m02)       <target type='serial' port='0'/>
	I0429 12:48:08.900192  870218 main.go:141] libmachine: (ha-212075-m02)     </console>
	I0429 12:48:08.900203  870218 main.go:141] libmachine: (ha-212075-m02)     <rng model='virtio'>
	I0429 12:48:08.900216  870218 main.go:141] libmachine: (ha-212075-m02)       <backend model='random'>/dev/random</backend>
	I0429 12:48:08.900225  870218 main.go:141] libmachine: (ha-212075-m02)     </rng>
	I0429 12:48:08.900234  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.900242  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.900252  870218 main.go:141] libmachine: (ha-212075-m02)   </devices>
	I0429 12:48:08.900259  870218 main.go:141] libmachine: (ha-212075-m02) </domain>
	I0429 12:48:08.900268  870218 main.go:141] libmachine: (ha-212075-m02) 
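The domain XML emitted above is what the kvm2 driver hands to libvirt to define and boot the new node. A rough sketch of that step using the libvirt Go bindings follows (illustrative only, with simplified error handling; the package path and XML file name are assumptions, not minikube's actual driver code):

	// Define a domain from an XML description and start it, mirroring the
	// "define libvirt domain using xml" / "Creating domain..." steps above.
	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the node config
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Hypothetical file holding the <domain type='kvm'> definition logged above.
		xml, err := os.ReadFile("ha-212075-m02.xml")
		if err != nil {
			log.Fatalf("read xml: %v", err)
		}

		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM; the DHCP lease appears later
			log.Fatalf("start: %v", err)
		}
	}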
	I0429 12:48:08.907946  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:a9:53:79 in network default
	I0429 12:48:08.908582  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring networks are active...
	I0429 12:48:08.908607  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:08.909355  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring network default is active
	I0429 12:48:08.909634  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring network mk-ha-212075 is active
	I0429 12:48:08.910063  870218 main.go:141] libmachine: (ha-212075-m02) Getting domain xml...
	I0429 12:48:08.910889  870218 main.go:141] libmachine: (ha-212075-m02) Creating domain...
	I0429 12:48:10.185813  870218 main.go:141] libmachine: (ha-212075-m02) Waiting to get IP...
	I0429 12:48:10.186939  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.187509  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.187547  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.187468  870647 retry.go:31] will retry after 301.578397ms: waiting for machine to come up
	I0429 12:48:10.491341  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.491897  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.491932  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.491839  870647 retry.go:31] will retry after 321.98325ms: waiting for machine to come up
	I0429 12:48:10.815451  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.815808  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.815832  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.815763  870647 retry.go:31] will retry after 394.050947ms: waiting for machine to come up
	I0429 12:48:11.211473  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:11.211909  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:11.211942  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:11.211849  870647 retry.go:31] will retry after 430.51973ms: waiting for machine to come up
	I0429 12:48:11.644676  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:11.645219  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:11.645248  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:11.645148  870647 retry.go:31] will retry after 709.605764ms: waiting for machine to come up
	I0429 12:48:12.356069  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:12.356525  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:12.356593  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:12.356473  870647 retry.go:31] will retry after 890.075621ms: waiting for machine to come up
	I0429 12:48:13.248841  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:13.249370  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:13.249406  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:13.249304  870647 retry.go:31] will retry after 727.943001ms: waiting for machine to come up
	I0429 12:48:13.978718  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:13.979281  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:13.979316  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:13.979215  870647 retry.go:31] will retry after 945.901335ms: waiting for machine to come up
	I0429 12:48:14.926762  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:14.927398  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:14.927432  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:14.927328  870647 retry.go:31] will retry after 1.459605646s: waiting for machine to come up
	I0429 12:48:16.388522  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:16.388934  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:16.388959  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:16.388887  870647 retry.go:31] will retry after 1.569864244s: waiting for machine to come up
	I0429 12:48:17.960898  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:17.961469  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:17.961500  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:17.961424  870647 retry.go:31] will retry after 2.113218061s: waiting for machine to come up
	I0429 12:48:20.078292  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:20.078741  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:20.078768  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:20.078698  870647 retry.go:31] will retry after 2.352898738s: waiting for machine to come up
	I0429 12:48:22.434312  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:22.434768  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:22.434792  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:22.434720  870647 retry.go:31] will retry after 4.188987093s: waiting for machine to come up
	I0429 12:48:26.627589  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:26.628066  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:26.628098  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:26.628002  870647 retry.go:31] will retry after 4.959414999s: waiting for machine to come up
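The repeated "will retry after ..." messages above come from a jittered, growing backoff while the driver polls for the new domain's DHCP lease. A minimal sketch of that wait pattern is shown below (illustrative only; waitForIP and its constants are assumptions, not minikube's retry helper):

	// Poll for an IP with a randomized, growing delay, bounded by a deadline.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 3*time.Second {
				delay = delay * 3 / 2 // grow the base delay, roughly as in the log
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Toy lookup that "finds" an IP after a few attempts.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.36", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}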
	I0429 12:48:31.590773  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.591277  870218 main.go:141] libmachine: (ha-212075-m02) Found IP for machine: 192.168.39.36
	I0429 12:48:31.591299  870218 main.go:141] libmachine: (ha-212075-m02) Reserving static IP address...
	I0429 12:48:31.591312  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has current primary IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.591748  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find host DHCP lease matching {name: "ha-212075-m02", mac: "52:54:00:46:f4:9a", ip: "192.168.39.36"} in network mk-ha-212075
	I0429 12:48:31.678248  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Getting to WaitForSSH function...
	I0429 12:48:31.678313  870218 main.go:141] libmachine: (ha-212075-m02) Reserved static IP address: 192.168.39.36
	I0429 12:48:31.678330  870218 main.go:141] libmachine: (ha-212075-m02) Waiting for SSH to be available...
	I0429 12:48:31.681502  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.682128  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.682159  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.682371  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using SSH client type: external
	I0429 12:48:31.682397  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa (-rw-------)
	I0429 12:48:31.682431  870218 main.go:141] libmachine: (ha-212075-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:48:31.682456  870218 main.go:141] libmachine: (ha-212075-m02) DBG | About to run SSH command:
	I0429 12:48:31.682464  870218 main.go:141] libmachine: (ha-212075-m02) DBG | exit 0
	I0429 12:48:31.811773  870218 main.go:141] libmachine: (ha-212075-m02) DBG | SSH cmd err, output: <nil>: 
	I0429 12:48:31.812004  870218 main.go:141] libmachine: (ha-212075-m02) KVM machine creation complete!
	I0429 12:48:31.812373  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:31.812982  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:31.813232  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:31.813485  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:48:31.813500  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:48:31.814808  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:48:31.814824  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:48:31.814830  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:48:31.814836  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:31.817101  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.817490  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.817519  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.817650  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:31.817845  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.818046  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.818236  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:31.818444  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:31.818670  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:31.818681  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:48:31.935100  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:48:31.935130  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:48:31.935138  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:31.938171  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.938493  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.938527  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.938645  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:31.938880  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.939058  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.939226  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:31.939407  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:31.939600  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:31.939613  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:48:32.053082  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:48:32.053177  870218 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:48:32.053189  870218 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:48:32.053198  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.053466  870218 buildroot.go:166] provisioning hostname "ha-212075-m02"
	I0429 12:48:32.053493  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.053731  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.056710  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.057187  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.057213  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.057373  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.057590  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.057787  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.057940  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.058097  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.058291  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.058303  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075-m02 && echo "ha-212075-m02" | sudo tee /etc/hostname
	I0429 12:48:32.186743  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075-m02
	
	I0429 12:48:32.186780  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.190004  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.190426  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.190458  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.190737  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.190924  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.191144  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.191355  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.191580  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.191774  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.191799  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:48:32.313526  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:48:32.313572  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:48:32.313594  870218 buildroot.go:174] setting up certificates
	I0429 12:48:32.313607  870218 provision.go:84] configureAuth start
	I0429 12:48:32.313644  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.314022  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:32.316977  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.317366  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.317400  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.317566  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.319834  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.320185  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.320221  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.320393  870218 provision.go:143] copyHostCerts
	I0429 12:48:32.320430  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:48:32.320465  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:48:32.320475  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:48:32.320540  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:48:32.320633  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:48:32.320657  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:48:32.320664  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:48:32.320689  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:48:32.320743  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:48:32.320761  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:48:32.320767  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:48:32.320792  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:48:32.320890  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075-m02 san=[127.0.0.1 192.168.39.36 ha-212075-m02 localhost minikube]
	I0429 12:48:32.428710  870218 provision.go:177] copyRemoteCerts
	I0429 12:48:32.428786  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:48:32.428817  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.431477  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.431790  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.431816  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.431990  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.432223  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.432417  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.432560  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:32.523038  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:48:32.523114  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:48:32.552778  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:48:32.552860  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:48:32.582395  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:48:32.582487  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:48:32.611276  870218 provision.go:87] duration metric: took 297.652353ms to configureAuth
	I0429 12:48:32.611312  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:48:32.611552  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:32.611642  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.614288  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.614655  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.614690  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.614928  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.615179  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.615442  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.615591  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.615741  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.615994  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.616016  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:48:32.889700  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:48:32.889741  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:48:32.889754  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetURL
	I0429 12:48:32.891136  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using libvirt version 6000000
	I0429 12:48:32.893428  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.893833  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.893868  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.894083  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:48:32.894098  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:48:32.894106  870218 client.go:171] duration metric: took 24.3852485s to LocalClient.Create
	I0429 12:48:32.894135  870218 start.go:167] duration metric: took 24.385317751s to libmachine.API.Create "ha-212075"
	I0429 12:48:32.894148  870218 start.go:293] postStartSetup for "ha-212075-m02" (driver="kvm2")
	I0429 12:48:32.894162  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:48:32.894212  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:32.894482  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:48:32.894509  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.896782  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.897183  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.897213  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.897359  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.897544  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.897690  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.897796  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:32.982542  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:48:32.987133  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:48:32.987166  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:48:32.987242  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:48:32.987334  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:48:32.987350  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:48:32.987493  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:48:32.997640  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:48:33.023959  870218 start.go:296] duration metric: took 129.789656ms for postStartSetup
	I0429 12:48:33.024034  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:33.024677  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:33.027566  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.028047  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.028090  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.028348  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:33.028616  870218 start.go:128] duration metric: took 24.540303032s to createHost
	I0429 12:48:33.028651  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:33.031168  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.031576  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.031604  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.031827  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.032054  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.032225  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.032390  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.032628  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:33.032815  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:33.032826  870218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:48:33.145647  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394913.118554750
	
	I0429 12:48:33.145686  870218 fix.go:216] guest clock: 1714394913.118554750
	I0429 12:48:33.145722  870218 fix.go:229] Guest: 2024-04-29 12:48:33.11855475 +0000 UTC Remote: 2024-04-29 12:48:33.028632996 +0000 UTC m=+82.403785948 (delta=89.921754ms)
	I0429 12:48:33.145751  870218 fix.go:200] guest clock delta is within tolerance: 89.921754ms
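
For context, the fix.go check above only compares the guest's reported wall clock against the host-side timestamp and accepts the start when the skew is small. A minimal Go sketch of that comparison, reusing the two timestamps from the log; the one-second tolerance is an assumed illustrative value, not necessarily the threshold minikube applies:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the absolute skew between the guest and
    // host clocks stays below the allowed threshold.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        // Timestamps taken from the log lines above.
        guest := time.Date(2024, 4, 29, 12, 48, 33, 118554750, time.UTC)
        host := time.Date(2024, 4, 29, 12, 48, 33, 28632996, time.UTC)
        fmt.Println(guest.Sub(host))                           // prints 89.921754ms, matching the logged delta
        fmt.Println(withinTolerance(guest, host, time.Second)) // true under the assumed 1s tolerance
    }
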
	I0429 12:48:33.145760  870218 start.go:83] releasing machines lock for "ha-212075-m02", held for 24.657592144s
	I0429 12:48:33.145790  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.146156  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:33.149182  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.149826  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.149921  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.152272  870218 out.go:177] * Found network options:
	I0429 12:48:33.153616  870218 out.go:177]   - NO_PROXY=192.168.39.97
	W0429 12:48:33.154781  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:48:33.154814  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155526  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155717  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155839  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:48:33.155888  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	W0429 12:48:33.155947  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:48:33.156040  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:48:33.156068  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:33.159109  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159175  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159607  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.159639  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159705  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.159739  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159843  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.160130  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.160233  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.160299  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.160394  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.160462  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.160543  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:33.160577  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:33.408285  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:48:33.415387  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:48:33.415464  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:48:33.433502  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:48:33.433530  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:48:33.433612  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:48:33.450404  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:48:33.466440  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:48:33.466506  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:48:33.482112  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:48:33.497775  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:48:33.618172  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:48:33.775113  870218 docker.go:233] disabling docker service ...
	I0429 12:48:33.775228  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:48:33.791054  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:48:33.805551  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:48:33.929373  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:48:34.046045  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:48:34.062693  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:48:34.082723  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:48:34.082828  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.095496  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:48:34.095585  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.107376  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.119146  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.130938  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:48:34.143454  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.155583  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.174540  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
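
The sed invocations above pin the CRI-O pause image, switch the cgroup manager to cgroupfs and adjust the default sysctls in /etc/crio/crio.conf.d/02-crio.conf. As a rough equivalent for the first two edits only (minikube applies them remotely through ssh_runner, so this is an illustration rather than the actual implementation), the same rewrite could be done in Go:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Mirror the two sed edits: pin the pause image and the cgroup manager.
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }
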
	I0429 12:48:34.188124  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:48:34.200241  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:48:34.200313  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:48:34.215862  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
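
When the bridge-nf-call-iptables sysctl is missing, as in the status-255 failure above, minikube falls back to loading br_netfilter and then enables IPv4 forwarding. A small Go sketch of that fallback logic, assuming it runs with root privileges directly on the guest instead of over SSH:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // If the sysctl file isn't present, the bridge netfilter module isn't loaded yet.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            log.Printf("bridge-nf-call-iptables not available (%v), loading br_netfilter", err)
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Enable IPv4 forwarding, mirroring the `echo 1 > /proc/sys/net/ipv4/ip_forward` step.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
    }
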
	I0429 12:48:34.226798  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:48:34.345854  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:48:34.493199  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:48:34.493282  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:48:34.498665  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:48:34.498737  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:48:34.502609  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:48:34.546513  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:48:34.546611  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:48:34.577592  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:48:34.610836  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:48:34.612460  870218 out.go:177]   - env NO_PROXY=192.168.39.97
	I0429 12:48:34.614047  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:34.617088  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:34.617431  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:34.617464  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:34.617681  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:48:34.622595  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
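
The grep/echo/cp pipeline above drops any stale host.minikube.internal entry from the guest's /etc/hosts and appends the gateway address 192.168.39.1. A hedged Go sketch of the same rewrite, assuming it runs on the guest with permission to write /etc/hosts:

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites /etc/hosts so that exactly one entry maps name to ip,
    // mirroring the grep/echo/cp pipeline in the log above.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
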
	I0429 12:48:34.637467  870218 mustload.go:65] Loading cluster: ha-212075
	I0429 12:48:34.637709  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:34.638083  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:34.638119  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:34.655335  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0429 12:48:34.655921  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:34.656505  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:34.656533  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:34.656949  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:34.657168  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:34.658862  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:34.659208  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:34.659241  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:34.676178  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0429 12:48:34.676652  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:34.677199  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:34.677225  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:34.677649  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:34.677907  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:34.678108  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.36
	I0429 12:48:34.678120  870218 certs.go:194] generating shared ca certs ...
	I0429 12:48:34.678137  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.678279  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:48:34.678315  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:48:34.678324  870218 certs.go:256] generating profile certs ...
	I0429 12:48:34.678399  870218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:48:34.678425  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885
	I0429 12:48:34.678440  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.254]
	I0429 12:48:34.805493  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 ...
	I0429 12:48:34.805534  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885: {Name:mkd1806caee1a077a46115403308bba9c5b89af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.805721  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885 ...
	I0429 12:48:34.805735  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885: {Name:mk269ebb85b90f5fc58a4363fce8b015ee69584d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.805806  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:48:34.805933  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
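
The signed profile certificate above is the API server serving cert, whose IP SANs cover the service IPs 10.96.0.1 and 10.0.0.1, loopback, both control-plane node addresses and the kube-vip VIP 192.168.39.254. A hedged, function-only sketch of issuing such a certificate with Go's crypto/x509 (error paths trimmed; the minikubeCA pair is assumed to be loaded elsewhere):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueAPIServerCert signs a serving certificate carrying the SANs seen in the log above.
    func issueAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.36"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
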
	I0429 12:48:34.806072  870218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:48:34.806091  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:48:34.806103  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:48:34.806117  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:48:34.806130  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:48:34.806140  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:48:34.806150  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:48:34.806159  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:48:34.806171  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:48:34.806219  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:48:34.806252  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:48:34.806262  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:48:34.806284  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:48:34.806309  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:48:34.806339  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:48:34.806399  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:48:34.806444  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:34.806466  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:48:34.806484  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:48:34.806530  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:34.809733  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:34.810136  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:34.810161  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:34.810406  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:34.810632  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:34.810800  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:34.810968  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:34.887807  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 12:48:34.893360  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 12:48:34.905730  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 12:48:34.910714  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 12:48:34.925811  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 12:48:34.930751  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 12:48:34.942482  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 12:48:34.947167  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 12:48:34.959068  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 12:48:34.964374  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 12:48:34.977275  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 12:48:34.986440  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 12:48:35.002753  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:48:35.033535  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:48:35.060599  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:48:35.086939  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:48:35.113656  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 12:48:35.140419  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 12:48:35.167085  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:48:35.193420  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:48:35.218968  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:48:35.244325  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:48:35.269718  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:48:35.295519  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 12:48:35.314438  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 12:48:35.333492  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 12:48:35.351305  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 12:48:35.368872  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 12:48:35.386640  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 12:48:35.404506  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 12:48:35.422626  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:48:35.429174  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:48:35.441867  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.446719  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.446818  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.454462  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:48:35.466704  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:48:35.479054  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.484682  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.484764  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.490502  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:48:35.502670  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:48:35.514732  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.519580  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.519657  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.525715  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
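
Each CA bundle copied to /usr/share/ca-certificates is then exposed to OpenSSL by symlinking /etc/ssl/certs/<subject-hash>.0 to it, which is what the openssl x509 -hash and ln -fs steps above accomplish. A small Go sketch of the same idea, shelling out to openssl for the hash; paths are taken from the log and the sketch assumes root on the guest:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks /etc/ssl/certs/<hash>.0 at the given certificate,
    // mirroring the `openssl x509 -hash` + `ln -fs` steps in the log above.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ignore error; the link is recreated below
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, cert := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/854660.pem",
            "/usr/share/ca-certificates/8546602.pem",
        } {
            if err := linkBySubjectHash(cert); err != nil {
                log.Fatalf("%s: %v", cert, err)
            }
        }
    }
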
	I0429 12:48:35.537965  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:48:35.542342  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:48:35.542400  870218 kubeadm.go:928] updating node {m02 192.168.39.36 8443 v1.30.0 crio true true} ...
	I0429 12:48:35.542490  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:48:35.542526  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:48:35.542581  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:48:35.561058  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:48:35.561143  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
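
In the generated manifest only a handful of values vary per cluster: the VIP address (192.168.39.254 here), the interface carrying it and the API server port. A hedged text/template sketch of rendering just those fields; this is not minikube's own kube-vip.go template, only an illustration of the substitution:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeVIPParams holds the fields that differ between clusters; everything
    // else in the manifest above is static.
    type kubeVIPParams struct {
        Address   string // control-plane VIP, e.g. 192.168.39.254
        Interface string // guest NIC carrying the VIP, e.g. eth0
        Port      string // API server port, e.g. 8443
    }

    const snippet = `    - name: address
          value: {{ .Address }}
        - name: vip_interface
          value: {{ .Interface }}
        - name: port
          value: "{{ .Port }}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(snippet))
        // Values taken from the generated manifest above.
        _ = t.Execute(os.Stdout, kubeVIPParams{Address: "192.168.39.254", Interface: "eth0", Port: "8443"})
    }
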
	I0429 12:48:35.561210  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:48:35.574246  870218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:48:35.574321  870218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:48:35.585778  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:48:35.585815  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:48:35.585899  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:48:35.585897  870218 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0429 12:48:35.585912  870218 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0429 12:48:35.590788  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:48:35.590829  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:48:36.224127  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:48:36.224220  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:48:36.231982  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:48:36.232023  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:48:36.568380  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:48:36.585896  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:48:36.586003  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:48:36.590713  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:48:36.590749  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
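
kubectl, kubeadm and kubelet are each fetched from dl.k8s.io together with a companion .sha256 file and verified before being copied to /var/lib/minikube/binaries. A simplified Go sketch of that download-and-verify step for a single binary, with no caching; the URL is the one shown in the log:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads url and returns the response body.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // checksum file may carry a trailing filename
        if hex.EncodeToString(got[:]) != want {
            panic("kubectl checksum mismatch")
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
    }
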
	I0429 12:48:37.056796  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 12:48:37.067649  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 12:48:37.088488  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:48:37.108016  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:48:37.127116  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:48:37.131646  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:48:37.146577  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:48:37.271454  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:48:37.288942  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:37.289340  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:37.289389  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:37.305131  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0429 12:48:37.305662  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:37.306176  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:37.306198  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:37.306568  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:37.306815  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:37.306984  870218 start.go:316] joinCluster: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:48:37.307089  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:48:37.307111  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:37.310630  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:37.311149  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:37.311190  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:37.311387  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:37.311590  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:37.311753  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:37.311911  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:37.495514  870218 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:37.495571  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s07sl2.kup7dzd4wu3ttqwx --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m02 --control-plane --apiserver-advertise-address=192.168.39.36 --apiserver-bind-port=8443"
	I0429 12:49:02.049323  870218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s07sl2.kup7dzd4wu3ttqwx --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m02 --control-plane --apiserver-advertise-address=192.168.39.36 --apiserver-bind-port=8443": (24.553721774s)
	I0429 12:49:02.049397  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:49:02.558233  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075-m02 minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=false
	I0429 12:49:02.732186  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-212075-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 12:49:02.854666  870218 start.go:318] duration metric: took 25.547674937s to joinCluster
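
Once kubeadm join returns, the new member is labeled with minikube metadata and its control-plane NoSchedule taint is removed so the node can also run workloads, as the two kubectl invocations above show. A hedged local equivalent of those calls (minikube runs them through the kubectl binary copied onto the primary node; label values are shortened here):

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        node := "ha-212075-m02"
        // Label the node as in the log above (only two of the labels shown).
        run("label", "--overwrite", "nodes", node,
            "minikube.k8s.io/name=ha-212075", "minikube.k8s.io/primary=false")
        // The trailing "-" removes the taint so the control-plane node also schedules pods.
        run("taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-")
    }
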
	I0429 12:49:02.854762  870218 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:02.856131  870218 out.go:177] * Verifying Kubernetes components...
	I0429 12:49:02.855106  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:02.857470  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:03.067090  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:49:03.085215  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:49:03.085546  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 12:49:03.085629  870218 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0429 12:49:03.085948  870218 node_ready.go:35] waiting up to 6m0s for node "ha-212075-m02" to be "Ready" ...
	I0429 12:49:03.086041  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:03.086049  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:03.086057  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:03.086062  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:03.096961  870218 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 12:49:03.586849  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:03.586877  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:03.586887  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:03.586894  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:03.590813  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:04.086875  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:04.086906  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:04.086922  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:04.086929  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:04.118136  870218 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 12:49:04.586198  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:04.586239  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:04.586248  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:04.586253  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:04.590710  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:05.087118  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:05.087150  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:05.087158  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:05.087162  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:05.090865  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:05.091510  870218 node_ready.go:53] node "ha-212075-m02" has status "Ready":"False"
	I0429 12:49:05.586466  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:05.586495  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:05.586506  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:05.586511  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:05.591095  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:06.087044  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:06.087077  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:06.087093  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:06.087099  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:06.091377  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:06.587078  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:06.587107  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:06.587116  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:06.587121  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:06.592788  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:07.086787  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:07.086820  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:07.086831  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:07.086839  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:07.091631  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:07.092318  870218 node_ready.go:53] node "ha-212075-m02" has status "Ready":"False"
	I0429 12:49:07.586426  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:07.586461  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:07.586474  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:07.586481  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:07.593188  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:49:08.087222  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.087249  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.087260  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.087265  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.219961  870218 round_trippers.go:574] Response Status: 200 OK in 132 milliseconds
	I0429 12:49:08.220604  870218 node_ready.go:49] node "ha-212075-m02" has status "Ready":"True"
	I0429 12:49:08.220632  870218 node_ready.go:38] duration metric: took 5.134660973s for node "ha-212075-m02" to be "Ready" ...
	I0429 12:49:08.220646  870218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:49:08.220738  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:08.220753  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.220764  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.220773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.229060  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:49:08.235147  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.235258  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2t8g
	I0429 12:49:08.235267  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.235275  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.235281  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.314925  870218 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0429 12:49:08.315803  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.315828  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.315840  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.315847  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.329212  870218 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:49:08.329863  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.329897  870218 pod_ready.go:81] duration metric: took 94.712953ms for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.329913  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.330038  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x299s
	I0429 12:49:08.330053  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.330064  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.330072  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.333533  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.334362  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.334381  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.334391  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.334398  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.338012  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.338971  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.338995  870218 pod_ready.go:81] duration metric: took 9.067885ms for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.339012  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.339105  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075
	I0429 12:49:08.339117  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.339128  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.339137  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.342835  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.343473  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.343496  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.343507  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.343516  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.349400  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:08.349955  870218 pod_ready.go:92] pod "etcd-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.349979  870218 pod_ready.go:81] duration metric: took 10.955166ms for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.349992  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.350072  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:08.350082  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.350093  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.350099  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.353467  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.354125  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.354146  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.354156  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.354163  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.357021  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:08.850289  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:08.850319  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.850331  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.850336  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.854284  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.854924  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.854948  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.854958  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.854963  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.858066  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.351005  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:09.351035  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.351057  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.351065  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.354889  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.356025  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:09.356044  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.356052  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.356057  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.359159  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.850927  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:09.850957  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.850970  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.850983  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.856731  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:09.857966  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:09.857984  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.857992  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.857998  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.860636  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:10.350922  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:10.350956  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.350966  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.350971  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.355448  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:10.356559  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:10.356578  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.356587  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.356591  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.360394  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:10.361486  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:10.850899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:10.850929  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.850944  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.850949  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.855382  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:10.856870  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:10.856902  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.856914  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.856920  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.861517  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:11.350799  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:11.350827  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.350834  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.350838  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.354605  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:11.355335  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:11.355353  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.355378  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.355383  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.359577  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:11.851128  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:11.851157  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.851168  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.851173  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.855182  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:11.856109  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:11.856128  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.856138  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.856143  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.859010  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:12.350392  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:12.350424  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.350432  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.350436  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.354740  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:12.355746  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:12.355765  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.355773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.355777  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.358545  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:12.850371  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:12.850402  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.850412  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.850418  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.854752  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:12.855683  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:12.855700  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.855711  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.855718  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.858834  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:12.859529  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:13.351114  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:13.351145  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.351153  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.351156  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.355082  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:13.356231  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:13.356252  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.356259  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.356264  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.359839  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:13.850437  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:13.850465  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.850473  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.850477  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.855157  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:13.855992  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:13.856011  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.856018  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.856022  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.859036  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.351293  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:14.351330  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.351340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.351345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.356455  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:14.357250  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:14.357269  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.357277  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.357282  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.361121  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.850961  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:14.851003  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.851016  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.851022  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.855394  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:14.856176  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:14.856195  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.856202  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.856205  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.859738  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.860328  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:15.350798  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:15.350830  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.350841  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.350845  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.355817  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:15.357189  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:15.357221  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.357230  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.357246  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.361330  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:15.850942  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:15.850977  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.850986  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.850989  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.854870  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:15.855657  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:15.855678  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.855686  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.855690  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.859126  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.351154  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:16.351183  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.351191  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.351196  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.359437  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:49:16.360301  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:16.360324  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.360336  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.360342  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.363465  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.850422  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:16.850456  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.850466  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.850471  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.855565  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:16.856378  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:16.856396  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.856404  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.856409  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.860272  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.860861  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:17.350900  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:17.350927  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.350935  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.350938  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.355909  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:17.356733  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:17.356756  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.356767  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.356773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.360272  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:17.850305  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:17.850332  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.850340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.850345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.854785  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:17.855453  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:17.855469  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.855477  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.855482  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.858541  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.350427  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:18.350455  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.350468  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.350474  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.354383  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.354996  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.355013  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.355021  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.355025  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.358793  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.851129  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:18.851157  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.851166  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.851171  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.855430  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:18.856095  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.856112  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.856120  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.856123  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.859800  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.860384  870218 pod_ready.go:92] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.860405  870218 pod_ready.go:81] duration metric: took 10.5104068s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.860424  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.860487  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075
	I0429 12:49:18.860495  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.860502  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.860507  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.863876  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.864907  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:18.864922  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.864930  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.864933  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.868515  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.869138  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.869156  870218 pod_ready.go:81] duration metric: took 8.723811ms for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.869166  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.869238  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m02
	I0429 12:49:18.869246  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.869254  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.869261  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.872567  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.873514  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.873527  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.873535  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.873543  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.876640  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.877214  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.877232  870218 pod_ready.go:81] duration metric: took 8.058402ms for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.877242  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.877307  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075
	I0429 12:49:18.877316  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.877322  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.877328  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.880468  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.881203  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:18.881224  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.881235  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.881241  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.884107  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:18.885148  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.885169  870218 pod_ready.go:81] duration metric: took 7.919576ms for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.885180  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.885305  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:49:18.885316  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.885323  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.885328  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.888193  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:18.888830  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.888845  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.888856  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.888861  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.892058  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.892610  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.892627  870218 pod_ready.go:81] duration metric: took 7.442078ms for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.892638  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.052102  870218 request.go:629] Waited for 159.379221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:49:19.052177  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:49:19.052184  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.052194  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.052199  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.061885  870218 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:49:19.252140  870218 request.go:629] Waited for 189.369987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:19.252224  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:19.252232  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.252243  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.252261  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.255881  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:19.256791  870218 pod_ready.go:92] pod "kube-proxy-ncdsk" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:19.256814  870218 pod_ready.go:81] duration metric: took 364.169187ms for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.256826  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.451933  870218 request.go:629] Waited for 195.014453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:49:19.452030  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:49:19.452042  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.452054  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.452062  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.455910  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:19.652261  870218 request.go:629] Waited for 195.417524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:19.652340  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:19.652348  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.652359  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.652364  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.656950  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:19.657658  870218 pod_ready.go:92] pod "kube-proxy-sfmhh" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:19.657680  870218 pod_ready.go:81] duration metric: took 400.848571ms for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.657691  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.851979  870218 request.go:629] Waited for 194.202023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:49:19.852092  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:49:19.852100  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.852111  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.852117  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.856054  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.051273  870218 request.go:629] Waited for 194.323385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:20.051405  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:20.051419  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.051427  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.051431  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.055030  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.055579  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:20.055603  870218 pod_ready.go:81] duration metric: took 397.905703ms for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.055614  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.251711  870218 request.go:629] Waited for 196.011102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:49:20.251809  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:49:20.251817  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.251838  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.251857  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.256067  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:20.451164  870218 request.go:629] Waited for 194.30553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:20.451272  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:20.451280  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.451291  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.451297  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.454603  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.455129  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:20.455160  870218 pod_ready.go:81] duration metric: took 399.537636ms for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.455176  870218 pod_ready.go:38] duration metric: took 12.234512578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:49:20.455196  870218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:49:20.455270  870218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:49:20.472016  870218 api_server.go:72] duration metric: took 17.617201161s to wait for apiserver process to appear ...
	I0429 12:49:20.472049  870218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:49:20.472071  870218 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0429 12:49:20.478128  870218 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0429 12:49:20.478206  870218 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0429 12:49:20.478214  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.478222  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.478229  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.479205  870218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 12:49:20.479326  870218 api_server.go:141] control plane version: v1.30.0
	I0429 12:49:20.479348  870218 api_server.go:131] duration metric: took 7.292177ms to wait for apiserver health ...
	I0429 12:49:20.479376  870218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:49:20.651833  870218 request.go:629] Waited for 172.3703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:20.651916  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:20.651921  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.651930  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.651933  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.658972  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:49:20.663967  870218 system_pods.go:59] 17 kube-system pods found
	I0429 12:49:20.664028  870218 system_pods.go:61] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:49:20.664035  870218 system_pods.go:61] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:49:20.664039  870218 system_pods.go:61] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:49:20.664043  870218 system_pods.go:61] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:49:20.664046  870218 system_pods.go:61] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:49:20.664049  870218 system_pods.go:61] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:49:20.664052  870218 system_pods.go:61] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:49:20.664058  870218 system_pods.go:61] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:49:20.664061  870218 system_pods.go:61] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:49:20.664066  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:49:20.664069  870218 system_pods.go:61] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:49:20.664074  870218 system_pods.go:61] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:49:20.664078  870218 system_pods.go:61] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:49:20.664082  870218 system_pods.go:61] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:49:20.664087  870218 system_pods.go:61] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:49:20.664090  870218 system_pods.go:61] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:49:20.664092  870218 system_pods.go:61] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:49:20.664099  870218 system_pods.go:74] duration metric: took 184.712298ms to wait for pod list to return data ...
	I0429 12:49:20.664110  870218 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:49:20.851865  870218 request.go:629] Waited for 187.634894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:49:20.851947  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:49:20.851952  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.851960  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.851965  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.856481  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:20.856703  870218 default_sa.go:45] found service account: "default"
	I0429 12:49:20.856718  870218 default_sa.go:55] duration metric: took 192.601184ms for default service account to be created ...
	I0429 12:49:20.856727  870218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:49:21.052256  870218 request.go:629] Waited for 195.417866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:21.052333  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:21.052351  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:21.052361  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:21.052366  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:21.057947  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:21.062642  870218 system_pods.go:86] 17 kube-system pods found
	I0429 12:49:21.062687  870218 system_pods.go:89] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:49:21.062696  870218 system_pods.go:89] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:49:21.062703  870218 system_pods.go:89] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:49:21.062708  870218 system_pods.go:89] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:49:21.062714  870218 system_pods.go:89] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:49:21.062720  870218 system_pods.go:89] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:49:21.062727  870218 system_pods.go:89] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:49:21.062733  870218 system_pods.go:89] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:49:21.062739  870218 system_pods.go:89] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:49:21.062746  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:49:21.062757  870218 system_pods.go:89] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:49:21.062765  870218 system_pods.go:89] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:49:21.062774  870218 system_pods.go:89] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:49:21.062782  870218 system_pods.go:89] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:49:21.062792  870218 system_pods.go:89] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:49:21.062800  870218 system_pods.go:89] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:49:21.062806  870218 system_pods.go:89] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:49:21.062820  870218 system_pods.go:126] duration metric: took 206.083067ms to wait for k8s-apps to be running ...
	I0429 12:49:21.062833  870218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:49:21.062894  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:49:21.080250  870218 system_svc.go:56] duration metric: took 17.405204ms WaitForService to wait for kubelet
	I0429 12:49:21.080292  870218 kubeadm.go:576] duration metric: took 18.225480527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:49:21.080313  870218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:49:21.251755  870218 request.go:629] Waited for 171.363431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0429 12:49:21.251820  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0429 12:49:21.251825  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:21.251832  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:21.251837  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:21.258283  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:49:21.259370  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:49:21.259409  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:49:21.259433  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:49:21.259439  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:49:21.259446  870218 node_conditions.go:105] duration metric: took 179.126951ms to run NodePressure ...
	I0429 12:49:21.259466  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:49:21.259550  870218 start.go:254] writing updated cluster config ...
	I0429 12:49:21.261846  870218 out.go:177] 
	I0429 12:49:21.263452  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:21.263575  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:21.265353  870218 out.go:177] * Starting "ha-212075-m03" control-plane node in "ha-212075" cluster
	I0429 12:49:21.266472  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:49:21.266506  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:49:21.266623  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:49:21.266635  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:49:21.266802  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:21.267040  870218 start.go:360] acquireMachinesLock for ha-212075-m03: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:49:21.267097  870218 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "ha-212075-m03"
	I0429 12:49:21.267118  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:21.267234  870218 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0429 12:49:21.268814  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:49:21.268990  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:21.269033  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:21.286412  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0429 12:49:21.286903  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:21.287420  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:21.287439  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:21.287775  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:21.288007  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:21.288192  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:21.288372  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:49:21.288401  870218 client.go:168] LocalClient.Create starting
	I0429 12:49:21.288434  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:49:21.288469  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:49:21.288485  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:49:21.288542  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:49:21.288560  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:49:21.288570  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:49:21.288584  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:49:21.288592  870218 main.go:141] libmachine: (ha-212075-m03) Calling .PreCreateCheck
	I0429 12:49:21.288811  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:21.289218  870218 main.go:141] libmachine: Creating machine...
	I0429 12:49:21.289234  870218 main.go:141] libmachine: (ha-212075-m03) Calling .Create
	I0429 12:49:21.289387  870218 main.go:141] libmachine: (ha-212075-m03) Creating KVM machine...
	I0429 12:49:21.291003  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found existing default KVM network
	I0429 12:49:21.291207  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found existing private KVM network mk-ha-212075
	I0429 12:49:21.291323  870218 main.go:141] libmachine: (ha-212075-m03) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 ...
	I0429 12:49:21.291379  870218 main.go:141] libmachine: (ha-212075-m03) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:49:21.291474  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.291312  871030 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:49:21.291566  870218 main.go:141] libmachine: (ha-212075-m03) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:49:21.553727  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.553607  871030 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa...
	I0429 12:49:21.655477  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.655312  871030 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/ha-212075-m03.rawdisk...
	I0429 12:49:21.655512  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Writing magic tar header
	I0429 12:49:21.655527  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Writing SSH key tar header
	I0429 12:49:21.655537  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.655481  871030 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 ...
	I0429 12:49:21.655661  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03
	I0429 12:49:21.655697  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 (perms=drwx------)
	I0429 12:49:21.655710  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:49:21.655734  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:49:21.655747  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:49:21.655763  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:49:21.655775  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:49:21.655790  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home
	I0429 12:49:21.655852  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Skipping /home - not owner
	I0429 12:49:21.655872  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:49:21.655886  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:49:21.655898  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:49:21.655912  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:49:21.655924  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:49:21.655938  870218 main.go:141] libmachine: (ha-212075-m03) Creating domain...
	I0429 12:49:21.656952  870218 main.go:141] libmachine: (ha-212075-m03) define libvirt domain using xml: 
	I0429 12:49:21.656983  870218 main.go:141] libmachine: (ha-212075-m03) <domain type='kvm'>
	I0429 12:49:21.656994  870218 main.go:141] libmachine: (ha-212075-m03)   <name>ha-212075-m03</name>
	I0429 12:49:21.657006  870218 main.go:141] libmachine: (ha-212075-m03)   <memory unit='MiB'>2200</memory>
	I0429 12:49:21.657019  870218 main.go:141] libmachine: (ha-212075-m03)   <vcpu>2</vcpu>
	I0429 12:49:21.657028  870218 main.go:141] libmachine: (ha-212075-m03)   <features>
	I0429 12:49:21.657034  870218 main.go:141] libmachine: (ha-212075-m03)     <acpi/>
	I0429 12:49:21.657039  870218 main.go:141] libmachine: (ha-212075-m03)     <apic/>
	I0429 12:49:21.657044  870218 main.go:141] libmachine: (ha-212075-m03)     <pae/>
	I0429 12:49:21.657051  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657056  870218 main.go:141] libmachine: (ha-212075-m03)   </features>
	I0429 12:49:21.657061  870218 main.go:141] libmachine: (ha-212075-m03)   <cpu mode='host-passthrough'>
	I0429 12:49:21.657067  870218 main.go:141] libmachine: (ha-212075-m03)   
	I0429 12:49:21.657074  870218 main.go:141] libmachine: (ha-212075-m03)   </cpu>
	I0429 12:49:21.657079  870218 main.go:141] libmachine: (ha-212075-m03)   <os>
	I0429 12:49:21.657089  870218 main.go:141] libmachine: (ha-212075-m03)     <type>hvm</type>
	I0429 12:49:21.657129  870218 main.go:141] libmachine: (ha-212075-m03)     <boot dev='cdrom'/>
	I0429 12:49:21.657156  870218 main.go:141] libmachine: (ha-212075-m03)     <boot dev='hd'/>
	I0429 12:49:21.657169  870218 main.go:141] libmachine: (ha-212075-m03)     <bootmenu enable='no'/>
	I0429 12:49:21.657180  870218 main.go:141] libmachine: (ha-212075-m03)   </os>
	I0429 12:49:21.657189  870218 main.go:141] libmachine: (ha-212075-m03)   <devices>
	I0429 12:49:21.657203  870218 main.go:141] libmachine: (ha-212075-m03)     <disk type='file' device='cdrom'>
	I0429 12:49:21.657216  870218 main.go:141] libmachine: (ha-212075-m03)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/boot2docker.iso'/>
	I0429 12:49:21.657227  870218 main.go:141] libmachine: (ha-212075-m03)       <target dev='hdc' bus='scsi'/>
	I0429 12:49:21.657257  870218 main.go:141] libmachine: (ha-212075-m03)       <readonly/>
	I0429 12:49:21.657282  870218 main.go:141] libmachine: (ha-212075-m03)     </disk>
	I0429 12:49:21.657296  870218 main.go:141] libmachine: (ha-212075-m03)     <disk type='file' device='disk'>
	I0429 12:49:21.657306  870218 main.go:141] libmachine: (ha-212075-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:49:21.657320  870218 main.go:141] libmachine: (ha-212075-m03)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/ha-212075-m03.rawdisk'/>
	I0429 12:49:21.657333  870218 main.go:141] libmachine: (ha-212075-m03)       <target dev='hda' bus='virtio'/>
	I0429 12:49:21.657343  870218 main.go:141] libmachine: (ha-212075-m03)     </disk>
	I0429 12:49:21.657358  870218 main.go:141] libmachine: (ha-212075-m03)     <interface type='network'>
	I0429 12:49:21.657370  870218 main.go:141] libmachine: (ha-212075-m03)       <source network='mk-ha-212075'/>
	I0429 12:49:21.657382  870218 main.go:141] libmachine: (ha-212075-m03)       <model type='virtio'/>
	I0429 12:49:21.657395  870218 main.go:141] libmachine: (ha-212075-m03)     </interface>
	I0429 12:49:21.657403  870218 main.go:141] libmachine: (ha-212075-m03)     <interface type='network'>
	I0429 12:49:21.657410  870218 main.go:141] libmachine: (ha-212075-m03)       <source network='default'/>
	I0429 12:49:21.657425  870218 main.go:141] libmachine: (ha-212075-m03)       <model type='virtio'/>
	I0429 12:49:21.657438  870218 main.go:141] libmachine: (ha-212075-m03)     </interface>
	I0429 12:49:21.657446  870218 main.go:141] libmachine: (ha-212075-m03)     <serial type='pty'>
	I0429 12:49:21.657458  870218 main.go:141] libmachine: (ha-212075-m03)       <target port='0'/>
	I0429 12:49:21.657468  870218 main.go:141] libmachine: (ha-212075-m03)     </serial>
	I0429 12:49:21.657477  870218 main.go:141] libmachine: (ha-212075-m03)     <console type='pty'>
	I0429 12:49:21.657492  870218 main.go:141] libmachine: (ha-212075-m03)       <target type='serial' port='0'/>
	I0429 12:49:21.657504  870218 main.go:141] libmachine: (ha-212075-m03)     </console>
	I0429 12:49:21.657514  870218 main.go:141] libmachine: (ha-212075-m03)     <rng model='virtio'>
	I0429 12:49:21.657526  870218 main.go:141] libmachine: (ha-212075-m03)       <backend model='random'>/dev/random</backend>
	I0429 12:49:21.657536  870218 main.go:141] libmachine: (ha-212075-m03)     </rng>
	I0429 12:49:21.657548  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657555  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657562  870218 main.go:141] libmachine: (ha-212075-m03)   </devices>
	I0429 12:49:21.657571  870218 main.go:141] libmachine: (ha-212075-m03) </domain>
	I0429 12:49:21.657592  870218 main.go:141] libmachine: (ha-212075-m03) 
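
The domain XML echoed above is what the kvm2 driver hands to libvirt for the new m03 node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-212075 network plus the default network). As a rough, hypothetical illustration of how such a definition can be rendered from the machine parameters (this is not minikube's actual template code; all names here are made up for the example), a Go text/template sketch:

```go
package main

import (
	"os"
	"text/template"
)

// domainXML is a trimmed-down stand-in for the definition shown in the log;
// the field names and template text are illustrative, not minikube's own.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/>
    </disk>
    <disk type='file' device='disk'>
      <source file='{{.Disk}}'/><target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/><model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	// Values taken from the log for ha-212075-m03.
	t.Execute(os.Stdout, map[string]any{
		"Name":      "ha-212075-m03",
		"MemoryMiB": 2200,
		"CPUs":      2,
		"ISO":       "/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/boot2docker.iso",
		"Disk":      "/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/ha-212075-m03.rawdisk",
		"Network":   "mk-ha-212075",
	})
}
```

The rendered XML is then defined and started via libvirt, which is the step the following "Creating domain..." lines correspond to.
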
	I0429 12:49:21.666856  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:f8:63:70 in network default
	I0429 12:49:21.667717  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring networks are active...
	I0429 12:49:21.667743  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:21.668658  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring network default is active
	I0429 12:49:21.669085  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring network mk-ha-212075 is active
	I0429 12:49:21.669490  870218 main.go:141] libmachine: (ha-212075-m03) Getting domain xml...
	I0429 12:49:21.670407  870218 main.go:141] libmachine: (ha-212075-m03) Creating domain...
	I0429 12:49:22.960796  870218 main.go:141] libmachine: (ha-212075-m03) Waiting to get IP...
	I0429 12:49:22.961597  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:22.962057  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:22.962099  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:22.962046  871030 retry.go:31] will retry after 275.195421ms: waiting for machine to come up
	I0429 12:49:23.238662  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.239199  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.239240  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.239147  871030 retry.go:31] will retry after 254.361022ms: waiting for machine to come up
	I0429 12:49:23.495797  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.496358  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.496391  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.496305  871030 retry.go:31] will retry after 399.111276ms: waiting for machine to come up
	I0429 12:49:23.897726  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.898280  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.898315  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.898236  871030 retry.go:31] will retry after 423.835443ms: waiting for machine to come up
	I0429 12:49:24.324377  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:24.324945  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:24.324974  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:24.324899  871030 retry.go:31] will retry after 676.971457ms: waiting for machine to come up
	I0429 12:49:25.003929  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:25.004292  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:25.004315  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:25.004262  871030 retry.go:31] will retry after 923.473252ms: waiting for machine to come up
	I0429 12:49:25.928825  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:25.929398  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:25.929436  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:25.929332  871030 retry.go:31] will retry after 855.800309ms: waiting for machine to come up
	I0429 12:49:26.786759  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:26.787218  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:26.787248  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:26.787181  871030 retry.go:31] will retry after 999.873188ms: waiting for machine to come up
	I0429 12:49:27.788564  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:27.789010  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:27.789035  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:27.788950  871030 retry.go:31] will retry after 1.830294576s: waiting for machine to come up
	I0429 12:49:29.622339  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:29.622964  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:29.623001  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:29.622895  871030 retry.go:31] will retry after 2.277621565s: waiting for machine to come up
	I0429 12:49:31.901933  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:31.902475  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:31.902524  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:31.902398  871030 retry.go:31] will retry after 2.203385625s: waiting for machine to come up
	I0429 12:49:34.108550  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:34.108982  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:34.109014  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:34.108936  871030 retry.go:31] will retry after 3.624223076s: waiting for machine to come up
	I0429 12:49:37.735007  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:37.735616  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:37.735646  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:37.735568  871030 retry.go:31] will retry after 4.166668795s: waiting for machine to come up
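
The block above is the driver polling libvirt's DHCP leases for the new VM's address, sleeping a progressively longer, jittered delay between attempts until the lease shows up. A minimal, hypothetical sketch of that pattern (not the retry.go implementation the log references):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// growing the delay between tries, roughly like the "will retry after ..."
// lines in the log above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow the base delay each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	// Toy lookup that "finds" the lease on the fifth attempt.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.109", nil
	}, 10)
	fmt.Println(ip, err)
}
```
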
	I0429 12:49:41.903602  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:41.904123  870218 main.go:141] libmachine: (ha-212075-m03) Found IP for machine: 192.168.39.109
	I0429 12:49:41.904142  870218 main.go:141] libmachine: (ha-212075-m03) Reserving static IP address...
	I0429 12:49:41.904152  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has current primary IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:41.904544  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find host DHCP lease matching {name: "ha-212075-m03", mac: "52:54:00:1c:04:a1", ip: "192.168.39.109"} in network mk-ha-212075
	I0429 12:49:41.999071  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Getting to WaitForSSH function...
	I0429 12:49:41.999113  870218 main.go:141] libmachine: (ha-212075-m03) Reserved static IP address: 192.168.39.109
	I0429 12:49:41.999129  870218 main.go:141] libmachine: (ha-212075-m03) Waiting for SSH to be available...
	I0429 12:49:42.001885  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.002602  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.002632  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.002653  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using SSH client type: external
	I0429 12:49:42.002666  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa (-rw-------)
	I0429 12:49:42.002696  870218 main.go:141] libmachine: (ha-212075-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:49:42.002710  870218 main.go:141] libmachine: (ha-212075-m03) DBG | About to run SSH command:
	I0429 12:49:42.002748  870218 main.go:141] libmachine: (ha-212075-m03) DBG | exit 0
	I0429 12:49:42.132018  870218 main.go:141] libmachine: (ha-212075-m03) DBG | SSH cmd err, output: <nil>: 
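
Here the driver shells out to the system ssh binary with host-key checking disabled and the freshly generated machine key, running `exit 0` purely to confirm that sshd inside the VM is up and accepts the key. A hypothetical Go equivalent of that probe (minikube's real logic lives in its SSH helper packages; this only mirrors the flags shown in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same options the log shows for the external SSH client; a zero exit
	// status from "exit 0" means the VM is reachable over SSH.
	key := "/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa"
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.109",
		"exit 0")
	if err := cmd.Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
```
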
	I0429 12:49:42.132286  870218 main.go:141] libmachine: (ha-212075-m03) KVM machine creation complete!
	I0429 12:49:42.132639  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:42.133225  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:42.133438  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:42.133643  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:49:42.133665  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:49:42.135130  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:49:42.135148  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:49:42.135157  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:49:42.135168  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.137902  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.138260  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.138291  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.138462  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.138672  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.138886  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.139066  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.139259  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.139548  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.139562  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:49:42.251331  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:49:42.251394  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:49:42.251407  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.255739  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.256325  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.256371  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.256822  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.257086  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.257291  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.257464  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.257701  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.257939  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.257959  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:49:42.372811  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:49:42.372885  870218 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:49:42.372892  870218 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:49:42.372902  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.373263  870218 buildroot.go:166] provisioning hostname "ha-212075-m03"
	I0429 12:49:42.373296  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.373540  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.376574  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.377111  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.377148  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.377277  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.377493  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.377667  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.377828  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.378048  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.378311  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.378330  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075-m03 && echo "ha-212075-m03" | sudo tee /etc/hostname
	I0429 12:49:42.504636  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075-m03
	
	I0429 12:49:42.504679  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.507608  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.508004  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.508030  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.508303  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.508548  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.508754  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.508886  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.509117  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.509339  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.509357  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:49:42.626792  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:49:42.626829  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:49:42.626849  870218 buildroot.go:174] setting up certificates
	I0429 12:49:42.626863  870218 provision.go:84] configureAuth start
	I0429 12:49:42.626876  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.627259  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:42.630150  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.630519  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.630552  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.630703  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.633425  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.633770  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.633798  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.633925  870218 provision.go:143] copyHostCerts
	I0429 12:49:42.633964  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:49:42.634010  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:49:42.634023  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:49:42.634119  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:49:42.634237  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:49:42.634263  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:49:42.634273  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:49:42.634318  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:49:42.634403  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:49:42.634426  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:49:42.634434  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:49:42.634467  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:49:42.634540  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075-m03 san=[127.0.0.1 192.168.39.109 ha-212075-m03 localhost minikube]
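
At this point provisioning mints a server certificate for the node, signed by the local minikube CA and carrying the SANs listed above (127.0.0.1, the node IP 192.168.39.109, the hostname, localhost, minikube). The snippet below is a self-signed approximation of that step using Go's crypto/x509, just to show how those SANs end up in the certificate; minikube itself signs with the CA key under .minikube/certs rather than self-signing:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in for the server-cert step, reusing the SANs and
	// org the log reports for ha-212075-m03.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-212075-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-212075-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.109")},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
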
	I0429 12:49:42.737197  870218 provision.go:177] copyRemoteCerts
	I0429 12:49:42.737263  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:49:42.737297  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.740003  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.740382  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.740442  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.740606  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.740806  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.740978  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.741155  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:42.827122  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:49:42.827206  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:49:42.855209  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:49:42.855317  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:49:42.883770  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:49:42.883851  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:49:42.911410  870218 provision.go:87] duration metric: took 284.528347ms to configureAuth
	I0429 12:49:42.911452  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:49:42.911733  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:42.911834  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.914793  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.915175  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.915208  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.915408  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.915653  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.915839  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.915991  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.916165  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.916385  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.916411  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:49:43.217344  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:49:43.217385  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:49:43.217396  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetURL
	I0429 12:49:43.219000  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using libvirt version 6000000
	I0429 12:49:43.221697  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.222061  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.222087  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.222270  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:49:43.222283  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:49:43.222291  870218 client.go:171] duration metric: took 21.933879944s to LocalClient.Create
	I0429 12:49:43.222314  870218 start.go:167] duration metric: took 21.933944364s to libmachine.API.Create "ha-212075"
	I0429 12:49:43.222324  870218 start.go:293] postStartSetup for "ha-212075-m03" (driver="kvm2")
	I0429 12:49:43.222335  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:49:43.222370  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.222650  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:49:43.222690  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.225352  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.225819  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.225855  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.226068  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.226288  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.226485  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.226624  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.316706  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:49:43.321843  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:49:43.321882  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:49:43.321994  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:49:43.322091  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:49:43.322104  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:49:43.322368  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:49:43.334078  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:49:43.361992  870218 start.go:296] duration metric: took 139.649645ms for postStartSetup
	I0429 12:49:43.362063  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:43.362790  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:43.365832  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.366363  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.366399  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.366832  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:43.367146  870218 start.go:128] duration metric: took 22.099896004s to createHost
	I0429 12:49:43.367183  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.369765  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.370219  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.370248  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.370419  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.370666  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.370874  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.371071  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.371236  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:43.371460  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:43.371478  870218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:49:43.485175  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394983.453178524
	
	I0429 12:49:43.485209  870218 fix.go:216] guest clock: 1714394983.453178524
	I0429 12:49:43.485228  870218 fix.go:229] Guest: 2024-04-29 12:49:43.453178524 +0000 UTC Remote: 2024-04-29 12:49:43.367166051 +0000 UTC m=+152.742319003 (delta=86.012473ms)
	I0429 12:49:43.485253  870218 fix.go:200] guest clock delta is within tolerance: 86.012473ms
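
The clock check runs `date` inside the guest and compares it against the host-side timestamp; the roughly 86 ms skew is inside tolerance, so no forced time sync is needed. The same arithmetic, reproduced as a small Go snippet using the two timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host timestamps as reported above; their difference is the
	// clock delta checked against minikube's tolerance.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	guest, _ := time.Parse(layout, "2024-04-29 12:49:43.453178524 +0000 UTC")
	remote, _ := time.Parse(layout, "2024-04-29 12:49:43.367166051 +0000 UTC")
	fmt.Println(guest.Sub(remote)) // 86.012473ms
}
```
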
	I0429 12:49:43.485260  870218 start.go:83] releasing machines lock for "ha-212075-m03", held for 22.218152522s
	I0429 12:49:43.485292  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.485628  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:43.488595  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.489047  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.489074  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.491763  870218 out.go:177] * Found network options:
	I0429 12:49:43.493406  870218 out.go:177]   - NO_PROXY=192.168.39.97,192.168.39.36
	W0429 12:49:43.494652  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:49:43.494677  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:49:43.494698  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.495627  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.495861  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.496005  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:49:43.496064  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	W0429 12:49:43.496177  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:49:43.496205  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:49:43.496282  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:49:43.496308  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.499425  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.499745  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.499847  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.499903  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.500060  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.500281  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.500316  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.500332  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.500490  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.500615  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.500728  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.500799  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.500895  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.501079  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.745296  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:49:43.753042  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:49:43.753146  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:49:43.771662  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:49:43.771702  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:49:43.771785  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:49:43.789253  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:49:43.805629  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:49:43.805716  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:49:43.823411  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:49:43.839119  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:49:43.965205  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:49:44.144532  870218 docker.go:233] disabling docker service ...
	I0429 12:49:44.144615  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:49:44.161598  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:49:44.176924  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:49:44.318518  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:49:44.444464  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:49:44.460146  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:49:44.482406  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:49:44.482480  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.495415  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:49:44.495495  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.507625  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.520065  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.532758  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:49:44.545179  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.557185  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.577205  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
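
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A minimal Go sketch of the same rewrite against a local copy of the file (the path and field names are taken from the log; this is not minikube's own code):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Path assumed from the log above; point this at a local copy when experimenting.
	const path = "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Pin the pause image and the cgroup manager, mirroring the sed expressions in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}
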
	I0429 12:49:44.590527  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:49:44.601541  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:49:44.601614  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:49:44.618752  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:49:44.630649  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:44.760043  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:49:44.908104  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:49:44.908203  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:49:44.913590  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:49:44.913671  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:49:44.917832  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:49:44.967004  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:49:44.967123  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:49:45.000292  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:49:45.033598  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:49:45.034927  870218 out.go:177]   - env NO_PROXY=192.168.39.97
	I0429 12:49:45.036448  870218 out.go:177]   - env NO_PROXY=192.168.39.97,192.168.39.36
	I0429 12:49:45.037641  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:45.040460  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:45.040872  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:45.040897  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:45.041102  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:49:45.045938  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
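
The /etc/hosts update above is the usual idempotent pattern: strip any stale host.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. A rough Go equivalent operating on an arbitrary hosts-format file (the file path and IP below are placeholders, not values from this run):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "<TAB>host" and appends
// "ip<TAB>host", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Placeholder path so the sketch does not touch the real /etc/hosts.
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
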
	I0429 12:49:45.060030  870218 mustload.go:65] Loading cluster: ha-212075
	I0429 12:49:45.060296  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:45.060651  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:45.060702  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:45.076464  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0429 12:49:45.076966  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:45.077478  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:45.077508  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:45.077859  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:45.078069  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:49:45.079901  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:49:45.080243  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:45.080285  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:45.096699  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0429 12:49:45.097237  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:45.097836  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:45.097862  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:45.098219  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:45.098405  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:49:45.098548  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.109
	I0429 12:49:45.098562  870218 certs.go:194] generating shared ca certs ...
	I0429 12:49:45.098585  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.098756  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:49:45.098808  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:49:45.098823  870218 certs.go:256] generating profile certs ...
	I0429 12:49:45.098924  870218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:49:45.098980  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a
	I0429 12:49:45.099003  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.109 192.168.39.254]
	I0429 12:49:45.305371  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a ...
	I0429 12:49:45.305425  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a: {Name:mk17ce06665377b1ef8d805c47fa76e8dc7207f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.305633  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a ...
	I0429 12:49:45.305648  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a: {Name:mk93b7c74bfe26fde2277c8d3d88ed9da0ad319b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.305724  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:49:45.305871  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
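
The profile apiserver certificate generated above carries every address the API server can be reached at as a SAN: the service IPs, the three control-plane node IPs and the VIP. A self-contained Go sketch of issuing such a cert from a CA looks roughly like this (key sizes, validity periods and file names are assumptions, not minikube's values; minikube signs with the shared ca.key rather than a throwaway CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real flow reuses .minikube/ca.crt and ca.key.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// API server leaf certificate with the SANs seen in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.36"),
			net.ParseIP("192.168.39.109"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0o644); err != nil {
		log.Fatal(err)
	}
}
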
	I0429 12:49:45.306065  870218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:49:45.306084  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:49:45.306097  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:49:45.306107  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:49:45.306122  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:49:45.306135  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:49:45.306148  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:49:45.306159  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:49:45.306177  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:49:45.306231  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:49:45.306261  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:49:45.306271  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:49:45.306291  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:49:45.306311  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:49:45.306339  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:49:45.306382  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:49:45.306422  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:49:45.306443  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:49:45.306461  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:45.306505  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:49:45.310065  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:45.310591  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:49:45.310624  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:45.310800  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:49:45.311020  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:49:45.311175  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:49:45.311404  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:49:45.391809  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 12:49:45.398724  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 12:49:45.413103  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 12:49:45.418467  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 12:49:45.432802  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 12:49:45.440754  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 12:49:45.455596  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 12:49:45.461211  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 12:49:45.474952  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 12:49:45.480353  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 12:49:45.494613  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 12:49:45.500103  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 12:49:45.514344  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:49:45.543413  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:49:45.571376  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:49:45.599504  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:49:45.626755  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 12:49:45.654437  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:49:45.683324  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:49:45.711211  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:49:45.741014  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:49:45.771670  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:49:45.799792  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:49:45.827179  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 12:49:45.846410  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 12:49:45.865742  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 12:49:45.884880  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 12:49:45.906579  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 12:49:45.934587  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 12:49:45.954505  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 12:49:45.975997  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:49:45.982992  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:49:45.997527  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.003152  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.003236  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.010012  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:49:46.023112  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:49:46.036215  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.041157  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.041267  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.047620  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:49:46.060305  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:49:46.073926  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.079569  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.079679  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.086524  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
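
Each CA bundle copied above is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) so the system trust store picks it up. A small sketch of that step, shelling out to openssl the same way the test does (assumes openssl is on PATH; the paths below are placeholders):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore computes the OpenSSL subject hash of certPath and creates
// the <hash>.0 symlink inside dir, mirroring the "openssl x509 -hash" + "ln -fs" steps in the log.
func linkIntoTrustStore(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder locations; the run links from /usr/share/ca-certificates into /etc/ssl/certs.
	if err := linkIntoTrustStore("minikubeCA.pem", "."); err != nil {
		log.Fatal(err)
	}
}
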
	I0429 12:49:46.099843  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:49:46.105249  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:49:46.105323  870218 kubeadm.go:928] updating node {m03 192.168.39.109 8443 v1.30.0 crio true true} ...
	I0429 12:49:46.105422  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:49:46.105448  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:49:46.105491  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:49:46.128538  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:49:46.128670  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
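
The kube-vip static pod manifest above is rendered per cluster with the VIP, port and interface filled in before being written to /etc/kubernetes/manifests/kube-vip.yaml. A cut-down Go sketch of that rendering (the template body is trimmed to the fields that vary and is not minikube's actual template):

package main

import (
	"log"
	"os"
	"text/template"
)

// Trimmed-down manifest template; only the values that vary per cluster are parameterised.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
  hostNetwork: true
`

func main() {
	params := struct {
		VIP       string
		Port      int
		Interface string
	}{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"}

	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}

Because the manifest lands in the static pod directory, the kubelet starts kube-vip on each control-plane node without the API server being involved, which is what lets the VIP come up before the cluster is fully joined.
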
	I0429 12:49:46.128781  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:49:46.141116  870218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:49:46.141207  870218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:49:46.154231  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 12:49:46.154250  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 12:49:46.154278  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:49:46.154231  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:49:46.154301  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:49:46.154310  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:49:46.154344  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:49:46.154399  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:49:46.167622  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:49:46.167674  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:49:46.183740  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:49:46.183810  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:49:46.183852  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:49:46.183861  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:49:46.248362  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:49:46.248422  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
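
The kubelet, kubeadm and kubectl binaries above are fetched from dl.k8s.io with a companion .sha256 file used as the checksum, then pushed into /var/lib/minikube/binaries on the node. A bare-bones sketch of that download-and-verify step (the URL follows the pattern shown in the log; this is not minikube's download code and error handling is trimmed):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest and returns the SHA-256 of what was written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"

	got, err := fetch(base, "kubectl")
	if err != nil {
		log.Fatal(err)
	}
	// The published checksum lives next to the binary as <name>.sha256.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want)))
	}
	fmt.Println("kubectl verified")
}
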
	I0429 12:49:47.245635  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 12:49:47.257815  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 12:49:47.279835  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:49:47.302241  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:49:47.321197  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:49:47.325960  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:49:47.340454  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:47.476126  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:49:47.495980  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:49:47.496376  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:47.496420  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:47.513164  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0429 12:49:47.513691  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:47.514330  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:47.514356  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:47.514754  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:47.514985  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:49:47.515150  870218 start.go:316] joinCluster: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:49:47.515340  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:49:47.515385  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:49:47.518988  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:47.519544  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:49:47.519582  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:47.519819  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:49:47.520072  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:49:47.520249  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:49:47.520432  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:49:47.723515  870218 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:47.723578  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vnojfi.4shmv2la5ipmuekk --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m03 --control-plane --apiserver-advertise-address=192.168.39.109 --apiserver-bind-port=8443"
	I0429 12:50:11.558514  870218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vnojfi.4shmv2la5ipmuekk --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m03 --control-plane --apiserver-advertise-address=192.168.39.109 --apiserver-bind-port=8443": (23.834907661s)
	I0429 12:50:11.558556  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:50:12.191163  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075-m03 minikube.k8s.io/updated_at=2024_04_29T12_50_12_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=false
	I0429 12:50:12.332310  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-212075-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 12:50:12.457983  870218 start.go:318] duration metric: took 24.942805115s to joinCluster
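
Joining m03 as a control-plane node is the standard two-step shown above: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command), then run it on the new node with the control-plane flags appended, followed by the minikube node label and removal of the NoSchedule taint. A sketch of assembling that command line (the token and discovery hash are illustrative placeholders, not reusable values):

package main

import (
	"fmt"
	"strings"
)

// buildJoinCmd appends the extra control-plane flags seen in the log to the
// base join command returned by "kubeadm token create --print-join-command".
func buildJoinCmd(base, nodeName, advertiseIP string, port int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return base + " " + strings.Join(extra, " ")
}

func main() {
	// Illustrative output of --print-join-command; a real token and hash come from the primary node.
	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(buildJoinCmd(base, "ha-212075-m03", "192.168.39.109", 8443))
}
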
	I0429 12:50:12.458080  870218 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:50:12.459647  870218 out.go:177] * Verifying Kubernetes components...
	I0429 12:50:12.458462  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:50:12.460883  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:50:12.786218  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:50:12.827610  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:50:12.827947  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 12:50:12.828022  870218 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0429 12:50:12.828333  870218 node_ready.go:35] waiting up to 6m0s for node "ha-212075-m03" to be "Ready" ...
	I0429 12:50:12.828437  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:12.828446  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:12.828457  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:12.828466  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:12.832030  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:13.328972  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:13.329002  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:13.329012  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:13.329016  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:13.333488  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:13.829280  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:13.829310  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:13.829317  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:13.829321  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:13.833464  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.328604  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:14.328634  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:14.328647  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:14.328654  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:14.332954  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.829170  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:14.829194  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:14.829202  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:14.829206  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:14.833857  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.834791  870218 node_ready.go:53] node "ha-212075-m03" has status "Ready":"False"
	I0429 12:50:15.329188  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:15.329215  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:15.329224  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:15.329228  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:15.333208  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:15.829111  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:15.829145  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:15.829156  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:15.829162  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:15.833305  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:16.328929  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:16.328964  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:16.328977  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:16.328983  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:16.332860  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:16.828885  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:16.828914  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:16.828923  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:16.828928  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:16.832858  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:17.328914  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:17.328949  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:17.328960  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:17.328966  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:17.333213  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:17.334349  870218 node_ready.go:53] node "ha-212075-m03" has status "Ready":"False"
	I0429 12:50:17.828574  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:17.828607  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:17.828618  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:17.828622  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:17.833207  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:18.329374  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:18.329402  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.329410  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.329415  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.333316  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.828563  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:18.828591  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.828600  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.828603  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.833296  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:18.833938  870218 node_ready.go:49] node "ha-212075-m03" has status "Ready":"True"
	I0429 12:50:18.833966  870218 node_ready.go:38] duration metric: took 6.005614001s for node "ha-212075-m03" to be "Ready" ...
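
The round-tripper lines above are the readiness poll: GET the node object roughly every 500ms and inspect its Ready condition until it flips to True, which here took about six seconds. A minimal polling sketch against the API, assuming kubectl proxy is serving an unauthenticated endpoint on localhost:8001 (the test itself authenticates with the client certificate shown earlier):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches one node and reports whether its Ready condition is True.
func nodeReady(base, name string) (bool, error) {
	resp, err := http.Get(base + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	const base = "http://127.0.0.1:8001" // kubectl proxy endpoint (assumption for the sketch)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := nodeReady(base, "ha-212075-m03")
		if err != nil {
			log.Fatal(err)
		}
		if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
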
	I0429 12:50:18.833976  870218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:50:18.834052  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:18.834061  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.834069  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.834076  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.848836  870218 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 12:50:18.858243  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.858371  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2t8g
	I0429 12:50:18.858382  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.858395  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.858405  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.861615  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.862314  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.862334  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.862342  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.862347  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.865782  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.866457  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.866479  870218 pod_ready.go:81] duration metric: took 8.200804ms for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.866490  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.866552  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x299s
	I0429 12:50:18.866560  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.866567  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.866572  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.870040  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.870670  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.870686  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.870696  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.870702  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.874007  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.874553  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.874575  870218 pod_ready.go:81] duration metric: took 8.079218ms for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.874586  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.874655  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075
	I0429 12:50:18.874665  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.874674  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.874680  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.878665  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.879434  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.879460  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.879471  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.879478  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.882752  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.883423  870218 pod_ready.go:92] pod "etcd-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.883447  870218 pod_ready.go:81] duration metric: took 8.854916ms for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.883459  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.883533  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:50:18.883540  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.883548  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.883553  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.886433  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:50:18.887117  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:18.887135  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.887143  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.887147  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.889866  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:50:18.890552  870218 pod_ready.go:92] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.890577  870218 pod_ready.go:81] duration metric: took 7.108063ms for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.890591  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:19.028989  870218 request.go:629] Waited for 138.314405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.029067  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.029072  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.029080  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.029085  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.032924  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.229133  870218 request.go:629] Waited for 195.395298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.229202  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.229207  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.229217  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.229220  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.232853  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.428983  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.429028  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.429039  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.429044  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.432974  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.629442  870218 request.go:629] Waited for 195.37277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.629530  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.629538  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.629547  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.629554  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.633456  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.891393  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.891419  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.891429  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.891433  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.895758  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:20.028865  870218 request.go:629] Waited for 132.326326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.028930  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.028936  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.028944  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.028948  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.032892  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.390899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:20.390924  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.390932  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.390936  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.394598  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.428843  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.428894  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.428908  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.428916  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.432578  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.891566  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:20.891605  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.891617  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.891624  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.895968  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:20.896842  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.896866  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.896880  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.896885  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.900749  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.901250  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:21.391390  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:21.391422  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.391432  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.391437  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.395486  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:21.396301  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:21.396323  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.396333  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.396337  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.399629  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:21.891709  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:21.891737  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.891746  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.891751  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.895525  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:21.896515  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:21.896534  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.896544  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.896549  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.899591  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:22.391086  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:22.391120  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.391129  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.391134  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.395272  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:22.396260  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:22.396280  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.396292  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.396298  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.399934  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:22.891706  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:22.891741  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.891754  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.891762  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.895857  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:22.896485  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:22.896507  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.896518  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.896524  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.899783  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:23.391764  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:23.391802  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.391813  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.391819  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.396591  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:23.397678  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:23.397697  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.397706  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.397712  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.401022  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:23.401638  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:23.891473  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:23.891518  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.891527  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.891531  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.895998  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:23.897166  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:23.897188  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.897198  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.897203  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.900834  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.391767  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:24.391793  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.391801  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.391806  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.395683  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.396364  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:24.396382  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.396390  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.396394  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.399833  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.890825  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:24.890853  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.890862  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.890866  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.894777  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.895773  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:24.895795  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.895803  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.895807  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.899690  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.391837  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:25.391877  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.391890  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.391897  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.395911  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.396638  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:25.396658  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.396667  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.396672  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.400113  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.890897  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:25.890925  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.890933  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.890937  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.894971  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:25.895752  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:25.895776  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.895786  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.895792  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.899327  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.900011  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:26.391728  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:26.391755  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.391766  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.391771  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.395868  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:26.396629  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:26.396652  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.396659  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.396662  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.399977  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:26.890872  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:26.890906  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.890916  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.890921  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.894973  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:26.895971  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:26.895993  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.896002  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.896005  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.899197  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.390790  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:27.390819  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.390828  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.390832  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.395080  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:27.396304  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:27.396327  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.396340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.396345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.399685  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.891328  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:27.891383  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.891396  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.891408  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.895963  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:27.896759  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:27.896777  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.896785  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.896789  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.900307  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.900827  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:28.391176  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:28.391206  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.391214  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.391219  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.395304  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.396189  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.396216  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.396228  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.396234  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.399510  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.891313  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:28.891343  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.891351  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.891368  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.896027  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.897004  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.897024  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.897033  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.897037  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.900096  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.900611  870218 pod_ready.go:92] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.900634  870218 pod_ready.go:81] duration metric: took 10.01003378s for pod "etcd-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.900658  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.900736  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075
	I0429 12:50:28.900748  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.900759  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.900772  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.904062  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.905012  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:28.905033  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.905041  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.905046  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.914390  870218 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:50:28.915162  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.915184  870218 pod_ready.go:81] duration metric: took 14.517505ms for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.915206  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.915288  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m02
	I0429 12:50:28.915299  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.915310  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.915316  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.918801  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.919480  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:28.919499  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.919511  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.919518  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.925028  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:28.925481  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.925502  870218 pod_ready.go:81] duration metric: took 10.2876ms for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.925512  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.925600  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m03
	I0429 12:50:28.925609  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.925617  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.925622  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.930192  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.930899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.930923  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.930934  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.930939  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.936104  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:28.936664  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.936690  870218 pod_ready.go:81] duration metric: took 11.171571ms for pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.936706  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.936798  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075
	I0429 12:50:28.936813  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.936821  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.936825  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.945481  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:50:28.946622  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:28.946652  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.946664  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.946671  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.950691  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.951321  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.951353  870218 pod_ready.go:81] duration metric: took 14.638423ms for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.951391  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.091822  870218 request.go:629] Waited for 140.318624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:50:29.091900  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:50:29.091906  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.091914  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.091922  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.096036  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:29.292235  870218 request.go:629] Waited for 195.433606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:29.292308  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:29.292313  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.292320  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.292327  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.296173  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:29.296828  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:29.296855  870218 pod_ready.go:81] duration metric: took 345.456965ms for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.296868  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.491886  870218 request.go:629] Waited for 194.903497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m03
	I0429 12:50:29.491968  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m03
	I0429 12:50:29.491976  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.491987  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.491995  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.497776  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:29.691599  870218 request.go:629] Waited for 192.371637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:29.691712  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:29.691717  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.691726  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.691731  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.695802  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:29.696538  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:29.696564  870218 pod_ready.go:81] duration metric: took 399.690214ms for pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.696579  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c27wn" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.892095  870218 request.go:629] Waited for 195.391435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27wn
	I0429 12:50:29.892181  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27wn
	I0429 12:50:29.892188  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.892199  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.892207  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.896134  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.092329  870218 request.go:629] Waited for 195.502679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:30.092433  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:30.092441  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.092452  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.092459  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.096200  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.096729  870218 pod_ready.go:92] pod "kube-proxy-c27wn" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.096753  870218 pod_ready.go:81] duration metric: took 400.166366ms for pod "kube-proxy-c27wn" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.096765  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.291287  870218 request.go:629] Waited for 194.439552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:50:30.291447  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:50:30.291461  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.291474  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.291485  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.297628  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:50:30.492144  870218 request.go:629] Waited for 193.412376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:30.492229  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:30.492248  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.492260  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.492268  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.496334  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:30.496860  870218 pod_ready.go:92] pod "kube-proxy-ncdsk" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.496885  870218 pod_ready.go:81] duration metric: took 400.112924ms for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.496899  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.691664  870218 request.go:629] Waited for 194.681632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:50:30.692181  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:50:30.692198  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.692210  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.692215  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.696295  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:30.891291  870218 request.go:629] Waited for 194.303719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:30.891444  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:30.891459  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.891470  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.891477  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.895385  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.896331  870218 pod_ready.go:92] pod "kube-proxy-sfmhh" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.896364  870218 pod_ready.go:81] duration metric: took 399.456713ms for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.896378  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.092314  870218 request.go:629] Waited for 195.838169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:50:31.092382  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:50:31.092387  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.092395  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.092403  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.096642  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:31.291736  870218 request.go:629] Waited for 194.40661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:31.291832  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:31.291839  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.291847  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.291853  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.295531  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:31.296465  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:31.296496  870218 pod_ready.go:81] duration metric: took 400.108799ms for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.296513  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.491925  870218 request.go:629] Waited for 195.318095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:50:31.491999  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:50:31.492008  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.492016  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.492029  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.496031  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:31.692069  870218 request.go:629] Waited for 195.409831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:31.692136  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:31.692141  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.692149  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.692154  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.696368  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:31.697106  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:31.697129  870218 pod_ready.go:81] duration metric: took 400.605212ms for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.697143  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.891352  870218 request.go:629] Waited for 194.092342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m03
	I0429 12:50:31.891478  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m03
	I0429 12:50:31.891491  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.891503  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.891512  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.895522  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:32.091710  870218 request.go:629] Waited for 195.413817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:32.091787  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:32.091792  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.091801  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.091806  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.095880  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.096637  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:32.096661  870218 pod_ready.go:81] duration metric: took 399.509578ms for pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:32.096673  870218 pod_ready.go:38] duration metric: took 13.262684539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:50:32.096690  870218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:50:32.096751  870218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:50:32.117800  870218 api_server.go:72] duration metric: took 19.659670409s to wait for apiserver process to appear ...
	I0429 12:50:32.117840  870218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:50:32.117869  870218 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0429 12:50:32.123543  870218 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0429 12:50:32.123629  870218 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0429 12:50:32.123638  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.123645  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.123653  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.124638  870218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 12:50:32.124714  870218 api_server.go:141] control plane version: v1.30.0
	I0429 12:50:32.124735  870218 api_server.go:131] duration metric: took 6.886333ms to wait for apiserver health ...
	I0429 12:50:32.124744  870218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:50:32.292129  870218 request.go:629] Waited for 167.303987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.292211  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.292216  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.292224  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.292230  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.299499  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:50:32.306492  870218 system_pods.go:59] 24 kube-system pods found
	I0429 12:50:32.306546  870218 system_pods.go:61] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:50:32.306553  870218 system_pods.go:61] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:50:32.306559  870218 system_pods.go:61] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:50:32.306564  870218 system_pods.go:61] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:50:32.306569  870218 system_pods.go:61] "etcd-ha-212075-m03" [92f8e094-a516-4426-a1c5-5f92d2022603] Running
	I0429 12:50:32.306575  870218 system_pods.go:61] "kindnet-2d8zp" [43b594a8-818d-423a-80f3-ad2b5dc79785] Running
	I0429 12:50:32.306579  870218 system_pods.go:61] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:50:32.306584  870218 system_pods.go:61] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:50:32.306591  870218 system_pods.go:61] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:50:32.306596  870218 system_pods.go:61] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:50:32.306600  870218 system_pods.go:61] "kube-apiserver-ha-212075-m03" [7484f88d-78bb-486c-9bc7-71c2a779083b] Running
	I0429 12:50:32.306605  870218 system_pods.go:61] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:50:32.306611  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:50:32.306620  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m03" [94aae029-f109-447d-8080-4f41c99b4dbb] Running
	I0429 12:50:32.306626  870218 system_pods.go:61] "kube-proxy-c27wn" [c45c40a2-2b5d-495f-862a-9e54d6fd6a69] Running
	I0429 12:50:32.306634  870218 system_pods.go:61] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:50:32.306639  870218 system_pods.go:61] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:50:32.306647  870218 system_pods.go:61] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:50:32.306652  870218 system_pods.go:61] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:50:32.306660  870218 system_pods.go:61] "kube-scheduler-ha-212075-m03" [0029c03a-f2cd-4964-a1f9-71127fc72819] Running
	I0429 12:50:32.306665  870218 system_pods.go:61] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:50:32.306674  870218 system_pods.go:61] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:50:32.306678  870218 system_pods.go:61] "kube-vip-ha-212075-m03" [68d29842-0ac7-4c12-a12c-546a42040bb2] Running
	I0429 12:50:32.306685  870218 system_pods.go:61] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:50:32.306694  870218 system_pods.go:74] duration metric: took 181.939437ms to wait for pod list to return data ...
	I0429 12:50:32.306709  870218 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:50:32.491873  870218 request.go:629] Waited for 185.06023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:50:32.491951  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:50:32.491957  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.491964  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.491970  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.496044  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.496184  870218 default_sa.go:45] found service account: "default"
	I0429 12:50:32.496202  870218 default_sa.go:55] duration metric: took 189.48187ms for default service account to be created ...
	I0429 12:50:32.496212  870218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:50:32.692200  870218 request.go:629] Waited for 195.908867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.692277  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.692285  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.692298  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.692309  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.700260  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:50:32.707846  870218 system_pods.go:86] 24 kube-system pods found
	I0429 12:50:32.707891  870218 system_pods.go:89] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:50:32.707906  870218 system_pods.go:89] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:50:32.707913  870218 system_pods.go:89] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:50:32.707920  870218 system_pods.go:89] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:50:32.707927  870218 system_pods.go:89] "etcd-ha-212075-m03" [92f8e094-a516-4426-a1c5-5f92d2022603] Running
	I0429 12:50:32.707934  870218 system_pods.go:89] "kindnet-2d8zp" [43b594a8-818d-423a-80f3-ad2b5dc79785] Running
	I0429 12:50:32.707943  870218 system_pods.go:89] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:50:32.707953  870218 system_pods.go:89] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:50:32.707960  870218 system_pods.go:89] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:50:32.707970  870218 system_pods.go:89] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:50:32.707977  870218 system_pods.go:89] "kube-apiserver-ha-212075-m03" [7484f88d-78bb-486c-9bc7-71c2a779083b] Running
	I0429 12:50:32.707984  870218 system_pods.go:89] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:50:32.707997  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:50:32.708007  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m03" [94aae029-f109-447d-8080-4f41c99b4dbb] Running
	I0429 12:50:32.708014  870218 system_pods.go:89] "kube-proxy-c27wn" [c45c40a2-2b5d-495f-862a-9e54d6fd6a69] Running
	I0429 12:50:32.708023  870218 system_pods.go:89] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:50:32.708037  870218 system_pods.go:89] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:50:32.708043  870218 system_pods.go:89] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:50:32.708049  870218 system_pods.go:89] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:50:32.708055  870218 system_pods.go:89] "kube-scheduler-ha-212075-m03" [0029c03a-f2cd-4964-a1f9-71127fc72819] Running
	I0429 12:50:32.708062  870218 system_pods.go:89] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:50:32.708071  870218 system_pods.go:89] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:50:32.708077  870218 system_pods.go:89] "kube-vip-ha-212075-m03" [68d29842-0ac7-4c12-a12c-546a42040bb2] Running
	I0429 12:50:32.708087  870218 system_pods.go:89] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:50:32.708096  870218 system_pods.go:126] duration metric: took 211.875538ms to wait for k8s-apps to be running ...
	I0429 12:50:32.708108  870218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:50:32.708158  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:50:32.729023  870218 system_svc.go:56] duration metric: took 20.905519ms WaitForService to wait for kubelet
	I0429 12:50:32.729060  870218 kubeadm.go:576] duration metric: took 20.270939588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:50:32.729082  870218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:50:32.891459  870218 request.go:629] Waited for 162.285599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0429 12:50:32.891529  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0429 12:50:32.891542  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.891550  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.891556  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.895591  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.896692  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896723  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896738  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896743  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896748  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896752  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896757  870218 node_conditions.go:105] duration metric: took 167.669117ms to run NodePressure ...
	I0429 12:50:32.896775  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:50:32.896808  870218 start.go:254] writing updated cluster config ...
	I0429 12:50:32.897223  870218 ssh_runner.go:195] Run: rm -f paused
	I0429 12:50:32.955697  870218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 12:50:32.958387  870218 out.go:177] * Done! kubectl is now configured to use "ha-212075" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.428404012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395244428378484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdf9f5f0-1872-453a-b843-b1a89c6e8a1d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.429025092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=037687d1-52c8-483a-9f32-19811fe3d08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.429089790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=037687d1-52c8-483a-9f32-19811fe3d08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.429330687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=037687d1-52c8-483a-9f32-19811fe3d08a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.473939901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfd173d5-5b44-44d9-a7d6-8446249da2e1 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.474047188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfd173d5-5b44-44d9-a7d6-8446249da2e1 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.475548063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf7916bc-3786-475c-a961-b5f23b1cb128 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.476141506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395244476113056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf7916bc-3786-475c-a961-b5f23b1cb128 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.476942710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea8ef6bc-df90-49c1-b96a-127e84cb4cac name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.477020283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea8ef6bc-df90-49c1-b96a-127e84cb4cac name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.477259287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea8ef6bc-df90-49c1-b96a-127e84cb4cac name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.517733035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35bddc26-5847-439d-9630-8e3946439fbb name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.517809098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35bddc26-5847-439d-9630-8e3946439fbb name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.519149178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5819a106-ffa0-4297-b4e6-f607697f3990 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.519597994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395244519573279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5819a106-ffa0-4297-b4e6-f607697f3990 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.520153327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a734fb19-2ab9-4ef8-9a79-5fb51b5161a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.520229355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a734fb19-2ab9-4ef8-9a79-5fb51b5161a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.520515885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a734fb19-2ab9-4ef8-9a79-5fb51b5161a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.561514037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21cc57d4-5cdf-49f1-817d-2207a1190ac2 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.561614768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21cc57d4-5cdf-49f1-817d-2207a1190ac2 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.563510448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dbb8700-dd6b-4851-8e15-1712a7500fee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.564235290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395244564205931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dbb8700-dd6b-4851-8e15-1712a7500fee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.564924511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd5e2c8d-d324-4f48-89cd-d449d8426998 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.565008451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd5e2c8d-d324-4f48-89cd-d449d8426998 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:54:04 ha-212075 crio[678]: time="2024-04-29 12:54:04.565308146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd5e2c8d-d324-4f48-89cd-d449d8426998 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6079fd69c4d07       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   377fa41dd93a5       busybox-fc5497c4f-rcq9m
	8923eb9969f74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   e0bec542cd689       coredns-7db6d8ff4d-c2t8g
	a7bedc2be5698       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   cb2c23b3b3b1c       coredns-7db6d8ff4d-x299s
	7101dd3def458       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   77a1ef53b73e0       storage-provisioner
	85ad5484c2e54       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   8f85b8a4ba604       kindnet-vnw75
	ae027e60b2a1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                0                   84bca27dac841       kube-proxy-ncdsk
	0c3dc33eb6d5d       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   13efcdd103317       kube-vip-ha-212075
	220538e592762       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   258b9f1c2d733       kube-scheduler-ha-212075
	382081d5ba19b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   87588649b2c79       kube-controller-manager-ha-212075
	e9f8269450f85       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   f21fd4330dda0       kube-apiserver-ha-212075
	6ba91c742f08c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   814df27c007a6       etcd-ha-212075
	
	
	==> coredns [8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad] <==
	[INFO] 10.244.1.2:38828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298968s
	[INFO] 10.244.1.2:58272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000274219s
	[INFO] 10.244.1.2:41537 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154361s
	[INFO] 10.244.1.2:51430 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167101s
	[INFO] 10.244.0.4:39294 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002366491s
	[INFO] 10.244.0.4:47691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095562s
	[INFO] 10.244.0.4:49991 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146136s
	[INFO] 10.244.0.4:45880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133788s
	[INFO] 10.244.2.2:40297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017628s
	[INFO] 10.244.2.2:44282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974026s
	[INFO] 10.244.2.2:48058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199321s
	[INFO] 10.244.2.2:50097 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220995s
	[INFO] 10.244.2.2:60877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132114s
	[INFO] 10.244.2.2:38824 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114121s
	[INFO] 10.244.1.2:60691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192262s
	[INFO] 10.244.1.2:51664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123427s
	[INFO] 10.244.1.2:57326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156295s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105493s
	[INFO] 10.244.0.4:39454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248959s
	[INFO] 10.244.2.2:56559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010789s
	[INFO] 10.244.1.2:57860 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144445s
	[INFO] 10.244.1.2:40470 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145332s
	[INFO] 10.244.0.4:35067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124783s
	[INFO] 10.244.2.2:47889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150138s
	[INFO] 10.244.2.2:60310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091159s
	
	
	==> coredns [a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6] <==
	[INFO] 10.244.2.2:49673 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00011362s
	[INFO] 10.244.2.2:52287 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001865115s
	[INFO] 10.244.1.2:45655 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028655691s
	[INFO] 10.244.1.2:34986 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171925s
	[INFO] 10.244.1.2:48145 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003370523s
	[INFO] 10.244.1.2:43604 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158954s
	[INFO] 10.244.0.4:58453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127934s
	[INFO] 10.244.0.4:52484 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001277738s
	[INFO] 10.244.0.4:47770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128102s
	[INFO] 10.244.0.4:53060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103039s
	[INFO] 10.244.2.2:55991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854135s
	[INFO] 10.244.2.2:33533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090157s
	[INFO] 10.244.1.2:52893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090867s
	[INFO] 10.244.0.4:54479 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102901s
	[INFO] 10.244.0.4:53525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000359828s
	[INFO] 10.244.2.2:57755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000264423s
	[INFO] 10.244.2.2:47852 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118616s
	[INFO] 10.244.2.2:38289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112347s
	[INFO] 10.244.1.2:55092 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184788s
	[INFO] 10.244.1.2:52235 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146353s
	[INFO] 10.244.0.4:55598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137209s
	[INFO] 10.244.0.4:54649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121493s
	[INFO] 10.244.0.4:50694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136791s
	[INFO] 10.244.2.2:49177 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104896s
	[INFO] 10.244.2.2:41037 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088839s
	
	
	==> describe nodes <==
	Name:               ha-212075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:54:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:48:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-212075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eefe9cc034f74464a919edd5f6b61c2b
	  System UUID:                eefe9cc0-34f7-4464-a919-edd5f6b61c2b
	  Boot ID:                    20b6e47d-4696-4b2a-ba7c-62e73184f5c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rcq9m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-7db6d8ff4d-c2t8g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 coredns-7db6d8ff4d-x299s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 etcd-ha-212075                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m11s
	  kube-system                 kindnet-vnw75                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-212075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-212075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-ncdsk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-212075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-212075                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m56s                  kube-proxy       
	  Normal  Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m11s                  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s                  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s                  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m58s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal  NodeReady                5m55s                  kubelet          Node ha-212075 status is now: NodeReady
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal  RegisteredNode           3m37s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	
	
	Name:               ha-212075-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:48:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:51:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-212075-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c5f79339047d6aaf2c88397c97942
	  System UUID:                088c5f79-3390-47d6-aaf2-c88397c97942
	  Boot ID:                    26514912-b71e-458e-b679-e7e1ba2580cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q8rf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-212075-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-sx2zd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-212075-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-212075-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-sfmhh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-212075-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-212075-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           4m47s                node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           3m37s                node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  NodeNotReady             102s                 node-controller  Node ha-212075-m02 status is now: NodeNotReady
	
	
	Name:               ha-212075-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_50_12_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:50:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:54:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-212075-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 535ef7f0e3c949a7801d0ab8f3e70b91
	  System UUID:                535ef7f0-e3c9-49a7-801d-0ab8f3e70b91
	  Boot ID:                    bc34a15b-f8d2-49d7-ac30-f44e734d2ed5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xw452                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-212075-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-2d8zp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-212075-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ha-212075-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-c27wn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-212075-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-212075-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-212075-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	
	
	Name:               ha-212075-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_51_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-212075-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee58aa83584b463285f294fa28d19e05
	  System UUID:                ee58aa83-584b-4632-85f2-94fa28d19e05
	  Boot ID:                    05003e8a-1683-4cb9-9ad4-cb1e00255e69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tbw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-bnbr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x3 over 2m53s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x3 over 2m53s)  kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x3 over 2m53s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  NodeReady                2m44s                  kubelet          Node ha-212075-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053063] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042417] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.595489] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.659216] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.658122] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.473754] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.066120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063959] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171013] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136787] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.290881] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.567542] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.833264] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.214053] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.340857] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.082766] kauditd_printk_skb: 40 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.082460] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab] <==
	{"level":"warn","ts":"2024-04-29T12:54:04.871229Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.881378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.887524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.906744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.925334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.93828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.945191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.950366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.954881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.964282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.972269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.980524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.987198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:04.993295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.004632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.01196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.01999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.025747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.030742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.037415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.044091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.046923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.047971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.062775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:54:05.064447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:54:05 up 6 min,  0 users,  load average: 0.26, 0.24, 0.12
	Linux ha-212075 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df] <==
	I0429 12:53:29.576577       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:53:39.583531       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:53:39.583869       1 main.go:227] handling current node
	I0429 12:53:39.583934       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:53:39.583960       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:53:39.584097       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:53:39.584117       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:53:39.584174       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:53:39.584193       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:53:49.598580       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:53:49.598778       1 main.go:227] handling current node
	I0429 12:53:49.598817       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:53:49.598840       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:53:49.599056       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:53:49.599087       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:53:49.599153       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:53:49.599172       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:53:59.611389       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:53:59.611435       1 main.go:227] handling current node
	I0429 12:53:59.611446       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:53:59.611452       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:53:59.611582       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:53:59.611607       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:53:59.611702       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:53:59.611726       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16] <==
	I0429 12:47:52.206856       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 12:47:52.212802       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 12:47:53.195190       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 12:47:53.272240       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 12:47:53.319588       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 12:47:53.336489       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 12:48:06.889872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:48:06.889872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:48:07.345600       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0429 12:50:37.199894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50238: use of closed network connection
	E0429 12:50:37.422099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50256: use of closed network connection
	E0429 12:50:37.642894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50274: use of closed network connection
	E0429 12:50:37.900488       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50288: use of closed network connection
	E0429 12:50:38.116582       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50304: use of closed network connection
	E0429 12:50:38.344331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50320: use of closed network connection
	E0429 12:50:38.560181       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50328: use of closed network connection
	E0429 12:50:38.773132       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50354: use of closed network connection
	E0429 12:50:39.005884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50368: use of closed network connection
	E0429 12:50:39.389303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50394: use of closed network connection
	E0429 12:50:39.600752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50404: use of closed network connection
	E0429 12:50:39.815611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50424: use of closed network connection
	E0429 12:50:40.035011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50440: use of closed network connection
	E0429 12:50:40.245499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50462: use of closed network connection
	E0429 12:50:40.457578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50492: use of closed network connection
	W0429 12:51:52.205330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.97]
	
	
	==> kube-controller-manager [382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d] <==
	I0429 12:49:01.445000       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m02"
	I0429 12:50:08.617501       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-212075-m03\" does not exist"
	I0429 12:50:08.663917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-212075-m03" podCIDRs=["10.244.2.0/24"]
	I0429 12:50:11.474203       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m03"
	I0429 12:50:34.047890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.017808ms"
	I0429 12:50:34.146964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.922663ms"
	I0429 12:50:34.376985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.814363ms"
	E0429 12:50:34.377038       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 12:50:34.377201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.971µs"
	I0429 12:50:34.384897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.618µs"
	I0429 12:50:35.869494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.517378ms"
	I0429 12:50:35.869952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.254µs"
	I0429 12:50:36.030792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.135927ms"
	I0429 12:50:36.030945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.996µs"
	I0429 12:50:36.451336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.394422ms"
	I0429 12:50:36.451462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.817µs"
	I0429 12:51:12.921789       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-212075-m04\" does not exist"
	I0429 12:51:12.964704       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-212075-m04" podCIDRs=["10.244.3.0/24"]
	I0429 12:51:16.503129       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m04"
	I0429 12:51:21.706750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	I0429 12:52:22.540869       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	I0429 12:52:22.611915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.169574ms"
	I0429 12:52:22.612104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.399µs"
	I0429 12:52:22.663107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.847549ms"
	I0429 12:52:22.663624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.292µs"
	
	
	==> kube-proxy [ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d] <==
	I0429 12:48:07.949311       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:48:07.970912       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0429 12:48:08.111869       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:48:08.111971       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:48:08.112000       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:48:08.119057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:48:08.119347       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:48:08.119376       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:48:08.121414       1 config.go:192] "Starting service config controller"
	I0429 12:48:08.121424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:48:08.121444       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:48:08.121447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:48:08.121966       1 config.go:319] "Starting node config controller"
	I0429 12:48:08.121973       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:48:08.222195       1 shared_informer.go:320] Caches are synced for node config
	I0429 12:48:08.222222       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:48:08.222241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf] <==
	W0429 12:47:51.705755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:47:51.705803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:47:51.705952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:47:51.705995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:47:51.762409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:47:51.762955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:47:51.831035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:47:51.831222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 12:47:54.028994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 12:50:33.990786       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xw452\": pod busybox-fc5497c4f-xw452 is already assigned to node \"ha-212075-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xw452" node="ha-212075-m03"
	E0429 12:50:33.992233       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f23383ad-d9ed-46ed-9327-d850179b2822(default/busybox-fc5497c4f-xw452) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-xw452"
	E0429 12:50:33.996630       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xw452\": pod busybox-fc5497c4f-xw452 is already assigned to node \"ha-212075-m03\"" pod="default/busybox-fc5497c4f-xw452"
	I0429 12:50:33.997434       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-xw452" node="ha-212075-m03"
	E0429 12:51:13.049010       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d6tbw\": pod kindnet-d6tbw is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-d6tbw" node="ha-212075-m04"
	E0429 12:51:13.049114       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7effb27d-adcf-42ce-9c98-d1cb8db7fd04(kube-system/kindnet-d6tbw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d6tbw"
	E0429 12:51:13.049139       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d6tbw\": pod kindnet-d6tbw is already assigned to node \"ha-212075-m04\"" pod="kube-system/kindnet-d6tbw"
	I0429 12:51:13.049158       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d6tbw" node="ha-212075-m04"
	E0429 12:51:13.049476       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bnbr8\": pod kube-proxy-bnbr8 is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bnbr8" node="ha-212075-m04"
	E0429 12:51:13.049643       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 16945b1d-2d33-4a95-b9ad-03d0665b74e8(kube-system/kube-proxy-bnbr8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bnbr8"
	E0429 12:51:13.049799       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bnbr8\": pod kube-proxy-bnbr8 is already assigned to node \"ha-212075-m04\"" pod="kube-system/kube-proxy-bnbr8"
	I0429 12:51:13.049931       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bnbr8" node="ha-212075-m04"
	E0429 12:51:13.198840       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9qm85\": pod kindnet-9qm85 is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9qm85" node="ha-212075-m04"
	E0429 12:51:13.199120       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f060a9cf-2fcb-4ef4-8991-954beeaa1614(kube-system/kindnet-9qm85) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9qm85"
	E0429 12:51:13.199784       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9qm85\": pod kindnet-9qm85 is already assigned to node \"ha-212075-m04\"" pod="kube-system/kindnet-9qm85"
	I0429 12:51:13.199829       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9qm85" node="ha-212075-m04"
	
	
	==> kubelet <==
	Apr 29 12:49:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:49:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:50:34 ha-212075 kubelet[1362]: I0429 12:50:34.030270    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=146.030038179 podStartE2EDuration="2m26.030038179s" podCreationTimestamp="2024-04-29 12:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 12:48:11.438371254 +0000 UTC m=+18.425865530" watchObservedRunningTime="2024-04-29 12:50:34.030038179 +0000 UTC m=+161.017532456"
	Apr 29 12:50:34 ha-212075 kubelet[1362]: I0429 12:50:34.032309    1362 topology_manager.go:215] "Topology Admit Handler" podUID="de803f70-5f57-4282-af1e-47845231d712" podNamespace="default" podName="busybox-fc5497c4f-rcq9m"
	Apr 29 12:50:34 ha-212075 kubelet[1362]: I0429 12:50:34.077142    1362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjh9c\" (UniqueName: \"kubernetes.io/projected/de803f70-5f57-4282-af1e-47845231d712-kube-api-access-xjh9c\") pod \"busybox-fc5497c4f-rcq9m\" (UID: \"de803f70-5f57-4282-af1e-47845231d712\") " pod="default/busybox-fc5497c4f-rcq9m"
	Apr 29 12:50:53 ha-212075 kubelet[1362]: E0429 12:50:53.153020    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:50:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:51:53 ha-212075 kubelet[1362]: E0429 12:51:53.156281    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:51:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:52:53 ha-212075 kubelet[1362]: E0429 12:52:53.152345    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:52:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:53:53 ha-212075 kubelet[1362]: E0429 12:53:53.152414    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:53:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
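The kube-scheduler entries at 12:51:13 above ("pod ... is already assigned to node \"ha-212075-m04\"") are binding conflicts for the kindnet and kube-proxy DaemonSet pods created while m04 was joining; the scheduler notes the pods are already assigned and aborts the retry ("Pod has been assigned to node. Abort adding it back to queue."), so they read as noise rather than the cause of this failure. One way to double-check, outside the harness, that those pods did land on m04 (a hypothetical follow-up command, reusing the kubectl context the test already uses):

	kubectl --context ha-212075 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-212075-m04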
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-212075 -n ha-212075
helpers_test.go:261: (dbg) Run:  kubectl --context ha-212075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.47s)
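The repeating kubelet "Could not set up iptables canary" entries in the log above come from kubelet's periodic canary check (roughly once a minute in this log) against the ip6tables nat table, which the guest kernel rejects with "Table does not exist (do you need to insmod?)". They recur every run and appear unrelated to the assertion that failed. To confirm the missing IPv6 NAT support by hand, a minimal check from inside the node could look like this (a sketch only; it assumes the minikube guest image ships the ip6table_nat module, which the buildroot-based ISO may not):

	out/minikube-linux-amd64 -p ha-212075 ssh -- lsmod                       # is ip6table_nat listed?
	out/minikube-linux-amd64 -p ha-212075 ssh -- sudo modprobe ip6table_nat  # try to load it
	out/minikube-linux-amd64 -p ha-212075 ssh -- sudo ip6tables -t nat -L    # does the nat table exist now?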

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (55.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (3.194694524s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:09.837126  875022 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:09.837731  875022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:09.837785  875022 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:09.837803  875022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:09.838286  875022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:09.838726  875022 out.go:298] Setting JSON to false
	I0429 12:54:09.838792  875022 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:09.838911  875022 notify.go:220] Checking for updates...
	I0429 12:54:09.839589  875022 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:09.839613  875022 status.go:255] checking status of ha-212075 ...
	I0429 12:54:09.840029  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:09.840077  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:09.859968  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0429 12:54:09.860471  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:09.861296  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:09.861332  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:09.861801  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:09.862057  875022 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:09.863915  875022 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:09.863946  875022 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:09.864291  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:09.864346  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:09.880685  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0429 12:54:09.881255  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:09.881815  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:09.881844  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:09.882243  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:09.882474  875022 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:09.886069  875022 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:09.886543  875022 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:09.886583  875022 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:09.886730  875022 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:09.887060  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:09.887117  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:09.904213  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0429 12:54:09.904764  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:09.905319  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:09.905336  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:09.905646  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:09.905952  875022 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:09.906332  875022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:09.906372  875022 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:09.909700  875022 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:09.910206  875022 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:09.910258  875022 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:09.910459  875022 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:09.910683  875022 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:09.910889  875022 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:09.911107  875022 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:10.000062  875022 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:10.007293  875022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:10.026731  875022 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:10.026765  875022 api_server.go:166] Checking apiserver status ...
	I0429 12:54:10.026809  875022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:10.043704  875022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:10.055384  875022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:10.055450  875022 ssh_runner.go:195] Run: ls
	I0429 12:54:10.060508  875022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:10.067255  875022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:10.067291  875022 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:10.067303  875022 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:10.067325  875022 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:10.067695  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:10.067742  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:10.083811  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0429 12:54:10.084387  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:10.084921  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:10.084944  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:10.085303  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:10.085524  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:10.087200  875022 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:10.087220  875022 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:10.087573  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:10.087618  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:10.103397  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0429 12:54:10.103918  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:10.104458  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:10.104486  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:10.104786  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:10.105102  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:10.108063  875022 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:10.108523  875022 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:10.108548  875022 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:10.108758  875022 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:10.109078  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:10.109119  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:10.124897  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0429 12:54:10.125364  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:10.125878  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:10.125919  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:10.126326  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:10.126550  875022 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:10.126755  875022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:10.126782  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:10.129515  875022 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:10.130077  875022 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:10.130111  875022 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:10.130241  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:10.130440  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:10.130620  875022 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:10.130799  875022 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:12.591721  875022 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:12.591834  875022 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:12.591860  875022 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:12.591869  875022 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:12.591899  875022 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:12.591906  875022 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:12.592233  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.592284  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.610773  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0429 12:54:12.611382  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.612009  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.612042  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.612367  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.612540  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:12.614221  875022 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:12.614269  875022 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:12.614610  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.614662  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.630879  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0429 12:54:12.631422  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.631997  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.632021  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.632360  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.632604  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:12.635687  875022 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:12.636093  875022 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:12.636124  875022 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:12.636276  875022 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:12.636647  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.636717  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.652895  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0429 12:54:12.653392  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.653978  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.654006  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.654364  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.654577  875022 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:12.654782  875022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:12.654807  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:12.657887  875022 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:12.658371  875022 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:12.658411  875022 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:12.658720  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:12.658934  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:12.659082  875022 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:12.659209  875022 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:12.743803  875022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:12.761206  875022 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:12.761266  875022 api_server.go:166] Checking apiserver status ...
	I0429 12:54:12.761312  875022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:12.777926  875022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:12.790292  875022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:12.790366  875022 ssh_runner.go:195] Run: ls
	I0429 12:54:12.795857  875022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:12.800529  875022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:12.800564  875022 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:12.800573  875022 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:12.800590  875022 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:12.800927  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.801014  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.820508  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0429 12:54:12.820988  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.821513  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.821551  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.821927  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.822180  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:12.824245  875022 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:12.824266  875022 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:12.824564  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.824604  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.840709  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0429 12:54:12.841175  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.841677  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.841700  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.842032  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.842247  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:12.845291  875022 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:12.845725  875022 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:12.845761  875022 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:12.846017  875022 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:12.846325  875022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:12.846366  875022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:12.862258  875022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0429 12:54:12.862746  875022 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:12.863291  875022 main.go:141] libmachine: Using API Version  1
	I0429 12:54:12.863322  875022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:12.863667  875022 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:12.863878  875022 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:12.864090  875022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:12.864117  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:12.867121  875022 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:12.867564  875022 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:12.867609  875022 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:12.867841  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:12.868039  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:12.868210  875022 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:12.868360  875022 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:12.951637  875022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:12.970306  875022 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
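In this first status pass, m02 is reported as host: Error / kubelet: Nonexistent because the SSH dial to 192.168.39.36:22 failed with "no route to host": `node start m02` had only just been issued and the VM was still coming back up, so the storage-capacity probe (`df -h /var`) could not run. A manual reachability check at that point might have looked like the following (hypothetical commands on the CI host; they assume nc is installed and that this minikube build accepts -n for ssh as it does for status):

	nc -vz -w 3 192.168.39.36 22                                             # is sshd on m02 answering yet?
	out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 -- uptime     # once port 22 responds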
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (5.187018884s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:13.987275  875122 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:13.987602  875122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:13.987613  875122 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:13.987618  875122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:13.987850  875122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:13.988075  875122 out.go:298] Setting JSON to false
	I0429 12:54:13.988108  875122 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:13.988233  875122 notify.go:220] Checking for updates...
	I0429 12:54:13.988723  875122 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:13.988750  875122 status.go:255] checking status of ha-212075 ...
	I0429 12:54:13.989255  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:13.989322  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.010328  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0429 12:54:14.010860  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.011626  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.011657  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.012220  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.012542  875122 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:14.014493  875122 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:14.014518  875122 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:14.015002  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:14.015066  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.031887  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0429 12:54:14.032447  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.033013  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.033047  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.033351  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.033536  875122 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:14.037079  875122 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:14.037646  875122 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:14.037685  875122 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:14.037824  875122 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:14.038146  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:14.038186  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.054130  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0429 12:54:14.054579  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.055097  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.055120  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.055494  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.055699  875122 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:14.055899  875122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:14.055924  875122 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:14.059293  875122 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:14.059793  875122 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:14.059825  875122 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:14.060038  875122 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:14.060302  875122 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:14.060482  875122 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:14.060641  875122 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:14.151768  875122 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:14.159112  875122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:14.175597  875122 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:14.175633  875122 api_server.go:166] Checking apiserver status ...
	I0429 12:54:14.175673  875122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:14.194972  875122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:14.208584  875122 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:14.208655  875122 ssh_runner.go:195] Run: ls
	I0429 12:54:14.213908  875122 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:14.218637  875122 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:14.218668  875122 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:14.218680  875122 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:14.218699  875122 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:14.219015  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:14.219057  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.236477  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0429 12:54:14.237042  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.237544  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.237567  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.237942  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.238149  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:14.239936  875122 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:14.239968  875122 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:14.240321  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:14.240381  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.257479  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I0429 12:54:14.257975  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.258524  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.258548  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.258871  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.259121  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:14.262217  875122 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:14.262579  875122 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:14.262620  875122 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:14.262849  875122 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:14.263151  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:14.263195  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:14.279352  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0429 12:54:14.279828  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:14.280386  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:14.280417  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:14.280793  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:14.281412  875122 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:14.281629  875122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:14.281653  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:14.284362  875122 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:14.284847  875122 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:14.284879  875122 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:14.285058  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:14.285284  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:14.285458  875122 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:14.285638  875122 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:15.663761  875122 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:15.663853  875122 retry.go:31] will retry after 146.632531ms: dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:18.735728  875122 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:18.735847  875122 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:18.735874  875122 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:18.735886  875122 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:18.735938  875122 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:18.735951  875122 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:18.736297  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.736356  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:18.753821  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0429 12:54:18.754318  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:18.754912  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:18.754948  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:18.755320  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:18.755541  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:18.757197  875122 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:18.757217  875122 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:18.757595  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.757649  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:18.773450  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0429 12:54:18.773980  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:18.774609  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:18.774637  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:18.774964  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:18.775174  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:18.778147  875122 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:18.778562  875122 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:18.778592  875122 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:18.778765  875122 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:18.779160  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.779221  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:18.795536  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0429 12:54:18.796018  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:18.796662  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:18.796693  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:18.797004  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:18.797212  875122 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:18.797428  875122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:18.797457  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:18.800377  875122 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:18.800840  875122 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:18.800871  875122 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:18.801056  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:18.801209  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:18.801318  875122 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:18.801496  875122 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:18.888658  875122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:18.906504  875122 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:18.906549  875122 api_server.go:166] Checking apiserver status ...
	I0429 12:54:18.906595  875122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:18.922931  875122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:18.933617  875122 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:18.933713  875122 ssh_runner.go:195] Run: ls
	I0429 12:54:18.939085  875122 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:18.944252  875122 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:18.944289  875122 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:18.944300  875122 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:18.944318  875122 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:18.944750  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.944804  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:18.961430  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0429 12:54:18.961928  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:18.962495  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:18.962531  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:18.962914  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:18.963095  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:18.964815  875122 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:18.964845  875122 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:18.965169  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.965253  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:18.982164  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0429 12:54:18.982588  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:18.983129  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:18.983153  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:18.983487  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:18.983688  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:18.986493  875122 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:18.986909  875122 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:18.986932  875122 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:18.987118  875122 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:18.987567  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:18.987672  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:19.005843  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0429 12:54:19.006415  875122 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:19.006998  875122 main.go:141] libmachine: Using API Version  1
	I0429 12:54:19.007039  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:19.007413  875122 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:19.007631  875122 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:19.007938  875122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:19.007970  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:19.010885  875122 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:19.011334  875122 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:19.011402  875122 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:19.011579  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:19.011794  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:19.011991  875122 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:19.012146  875122 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:19.091904  875122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:19.109106  875122 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
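The harness keeps re-running `status` (ha_test.go:428) while it waits for the restarted m02 to come back; each intermediate attempt exits 3 because m02 is still unreachable over SSH. A rough shell equivalent of that wait, assuming the -n and --format flags behave as in the helpers_test.go invocation earlier in this log (the real test uses Go retry helpers rather than a shell loop):

	until out/minikube-linux-amd64 -p ha-212075 status -n ha-212075-m02 --format='{{.Host}}' | grep -q '^Running$'; do
	  sleep 5
	done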
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (4.700362779s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:20.851281  875224 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:20.851570  875224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:20.851579  875224 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:20.851583  875224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:20.851790  875224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:20.852008  875224 out.go:298] Setting JSON to false
	I0429 12:54:20.852039  875224 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:20.852110  875224 notify.go:220] Checking for updates...
	I0429 12:54:20.852457  875224 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:20.852473  875224 status.go:255] checking status of ha-212075 ...
	I0429 12:54:20.852863  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:20.852921  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:20.871472  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0429 12:54:20.871986  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:20.872754  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:20.872788  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:20.873157  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:20.873385  875224 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:20.875216  875224 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:20.875235  875224 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:20.875565  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:20.875607  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:20.892911  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0429 12:54:20.893369  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:20.893963  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:20.893999  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:20.894353  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:20.894589  875224 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:20.897878  875224 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:20.898404  875224 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:20.898431  875224 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:20.898611  875224 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:20.898918  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:20.898979  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:20.915548  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0429 12:54:20.916201  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:20.916767  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:20.916790  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:20.917262  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:20.917542  875224 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:20.917782  875224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:20.917816  875224 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:20.920760  875224 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:20.921169  875224 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:20.921200  875224 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:20.921384  875224 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:20.921603  875224 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:20.921781  875224 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:20.921941  875224 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:21.016143  875224 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:21.022943  875224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:21.039242  875224 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:21.039275  875224 api_server.go:166] Checking apiserver status ...
	I0429 12:54:21.039314  875224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:21.057527  875224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:21.069057  875224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:21.069120  875224 ssh_runner.go:195] Run: ls
	I0429 12:54:21.075516  875224 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:21.080184  875224 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:21.080217  875224 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:21.080233  875224 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:21.080257  875224 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:21.080559  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:21.080586  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:21.097570  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0429 12:54:21.098135  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:21.098626  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:21.098650  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:21.098968  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:21.099180  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:21.100776  875224 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:21.100796  875224 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:21.101086  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:21.101116  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:21.118200  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0429 12:54:21.118724  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:21.119239  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:21.119266  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:21.119615  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:21.119779  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:21.122572  875224 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:21.123071  875224 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:21.123098  875224 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:21.123260  875224 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:21.123612  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:21.123663  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:21.139895  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0429 12:54:21.140408  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:21.140886  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:21.140911  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:21.141262  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:21.141466  875224 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:21.141712  875224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:21.141737  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:21.144681  875224 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:21.145123  875224 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:21.145148  875224 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:21.145427  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:21.145658  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:21.145838  875224 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:21.146015  875224 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:21.807720  875224 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:21.807778  875224 retry.go:31] will retry after 245.704374ms: dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:25.103632  875224 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:25.103783  875224 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:25.103808  875224 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:25.103816  875224 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:25.103851  875224 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:25.103866  875224 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:25.104211  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.104264  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.120461  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0429 12:54:25.121009  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.121541  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.121560  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.121999  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.122272  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:25.124035  875224 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:25.124056  875224 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:25.124358  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.124409  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.141551  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0429 12:54:25.142059  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.142544  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.142569  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.142904  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.143135  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:25.146350  875224 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:25.146788  875224 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:25.146824  875224 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:25.147041  875224 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:25.147423  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.147480  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.164746  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0429 12:54:25.165229  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.165842  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.165874  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.166245  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.166455  875224 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:25.166686  875224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:25.166719  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:25.170023  875224 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:25.170480  875224 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:25.170510  875224 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:25.170671  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:25.170891  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:25.171070  875224 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:25.171258  875224 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:25.256994  875224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:25.275228  875224 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:25.275267  875224 api_server.go:166] Checking apiserver status ...
	I0429 12:54:25.275318  875224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:25.292094  875224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:25.305778  875224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:25.305853  875224 ssh_runner.go:195] Run: ls
	I0429 12:54:25.312961  875224 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:25.319208  875224 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:25.319246  875224 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:25.319256  875224 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:25.319275  875224 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:25.319702  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.319758  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.336883  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I0429 12:54:25.337409  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.337927  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.337954  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.338326  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.338537  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:25.340475  875224 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:25.340505  875224 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:25.340869  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.340925  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.357251  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0429 12:54:25.357815  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.358465  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.358496  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.358850  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.359070  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:25.362427  875224 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:25.362999  875224 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:25.363031  875224 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:25.363275  875224 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:25.363623  875224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:25.363654  875224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:25.381059  875224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40569
	I0429 12:54:25.381536  875224 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:25.382184  875224 main.go:141] libmachine: Using API Version  1
	I0429 12:54:25.382210  875224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:25.382589  875224 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:25.382851  875224 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:25.383106  875224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:25.383137  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:25.386525  875224 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:25.386945  875224 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:25.386978  875224 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:25.387186  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:25.387436  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:25.387617  875224 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:25.387762  875224 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:25.469160  875224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:25.487402  875224 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
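In the run above, ha-212075-m02 is reported as "host: Error" with kubelet and apiserver "Nonexistent" because the SSH session to 192.168.39.36:22 cannot be established ("connect: no route to host"), so the per-node probes never run. A minimal Go sketch of the same TCP reachability test follows; the address and timeout are illustrative values taken from the log, not the test's own code.

	// dial_check.go - reproduces the reachability failure shown above:
	// "dial tcp 192.168.39.36:22: connect: no route to host".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.36:22", 5*time.Second)
		if err != nil {
			// On this host the error mirrors the log output.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}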
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (3.794844459s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:28.090132  875340 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:28.090436  875340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:28.090447  875340 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:28.090452  875340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:28.090676  875340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:28.090879  875340 out.go:298] Setting JSON to false
	I0429 12:54:28.090910  875340 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:28.090974  875340 notify.go:220] Checking for updates...
	I0429 12:54:28.092209  875340 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:28.092258  875340 status.go:255] checking status of ha-212075 ...
	I0429 12:54:28.093185  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.093261  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.109723  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0429 12:54:28.110344  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.111071  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.111098  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.111619  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.111927  875340 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:28.113867  875340 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:28.113889  875340 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:28.114303  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.114357  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.130488  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0429 12:54:28.131037  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.131677  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.131716  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.132044  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.132227  875340 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:28.135415  875340 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:28.135774  875340 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:28.135803  875340 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:28.135964  875340 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:28.136298  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.136340  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.153076  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0429 12:54:28.153572  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.154132  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.154164  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.154534  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.154756  875340 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:28.155000  875340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:28.155046  875340 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:28.158001  875340 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:28.158456  875340 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:28.158489  875340 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:28.158682  875340 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:28.158873  875340 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:28.159044  875340 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:28.159169  875340 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:28.243953  875340 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:28.250994  875340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:28.267971  875340 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:28.268007  875340 api_server.go:166] Checking apiserver status ...
	I0429 12:54:28.268046  875340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:28.284396  875340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:28.295725  875340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:28.295788  875340 ssh_runner.go:195] Run: ls
	I0429 12:54:28.301527  875340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:28.307949  875340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:28.307987  875340 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:28.308001  875340 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:28.308024  875340 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:28.308341  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.308398  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.325622  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I0429 12:54:28.326127  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.326705  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.326751  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.327175  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.327491  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:28.329574  875340 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:28.329598  875340 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:28.329895  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.329935  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.346363  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43685
	I0429 12:54:28.346918  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.347494  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.347522  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.347881  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.348144  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:28.351291  875340 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:28.351821  875340 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:28.351847  875340 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:28.352040  875340 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:28.352406  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:28.352447  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:28.368442  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0429 12:54:28.369016  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:28.369631  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:28.369659  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:28.369988  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:28.370184  875340 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:28.370378  875340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:28.370401  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:28.373701  875340 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:28.374155  875340 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:28.374185  875340 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:28.374350  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:28.374525  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:28.374681  875340 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:28.374824  875340 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:31.439668  875340 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:31.439778  875340 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:31.439802  875340 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:31.439818  875340 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:31.439844  875340 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:31.439852  875340 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:31.440361  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.440426  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.457090  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I0429 12:54:31.457592  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.458098  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.458122  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.458468  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.458664  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:31.460390  875340 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:31.460417  875340 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:31.460824  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.460880  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.477571  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I0429 12:54:31.478022  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.478588  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.478630  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.478980  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.479216  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:31.482469  875340 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:31.482987  875340 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:31.483019  875340 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:31.483152  875340 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:31.483502  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.483543  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.499224  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40271
	I0429 12:54:31.499712  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.500305  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.500330  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.500736  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.500996  875340 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:31.501226  875340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:31.501253  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:31.504364  875340 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:31.504873  875340 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:31.504918  875340 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:31.505124  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:31.505365  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:31.505554  875340 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:31.505727  875340 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:31.588109  875340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:31.605381  875340 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:31.605417  875340 api_server.go:166] Checking apiserver status ...
	I0429 12:54:31.605460  875340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:31.621799  875340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:31.633311  875340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:31.633417  875340 ssh_runner.go:195] Run: ls
	I0429 12:54:31.639617  875340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:31.647105  875340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:31.647138  875340 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:31.647148  875340 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:31.647166  875340 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:31.647545  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.647595  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.666121  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0429 12:54:31.666639  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.667190  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.667216  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.667613  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.667822  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:31.669608  875340 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:31.669632  875340 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:31.670095  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.670142  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.689415  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0429 12:54:31.689981  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.690542  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.690567  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.690925  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.691195  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:31.694587  875340 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:31.695143  875340 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:31.695187  875340 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:31.695458  875340 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:31.695778  875340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:31.695827  875340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:31.711802  875340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0429 12:54:31.712254  875340 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:31.712796  875340 main.go:141] libmachine: Using API Version  1
	I0429 12:54:31.712820  875340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:31.713170  875340 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:31.713366  875340 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:31.713571  875340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:31.713592  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:31.717472  875340 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:31.718050  875340 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:31.718100  875340 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:31.718333  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:31.718592  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:31.718781  875340 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:31.718957  875340 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:31.804133  875340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:31.821225  875340 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
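The first of these runs also shows the client retrying the failed dial ("will retry after 245.704374ms") before marking the node as Error. A rough Go sketch of that retry-with-backoff pattern follows; the delays and attempt count here are illustrative assumptions, not minikube's actual retry policy.

	// retry_dial.go - rough sketch of retrying a flaky dial with growing
	// delays, in the spirit of the "will retry after ..." lines above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		delay := 250 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // back off between attempts
		}
		return nil, lastErr
	}

	func main() {
		if conn, err := dialWithRetry("192.168.39.36:22", 3); err != nil {
			fmt.Println("giving up:", err)
		} else {
			conn.Close()
		}
	}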
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (3.79639692s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:35.030101  875440 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:35.030749  875440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:35.030773  875440 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:35.030781  875440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:35.031273  875440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:35.031704  875440 out.go:298] Setting JSON to false
	I0429 12:54:35.031754  875440 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:35.031817  875440 notify.go:220] Checking for updates...
	I0429 12:54:35.032523  875440 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:35.032546  875440 status.go:255] checking status of ha-212075 ...
	I0429 12:54:35.032954  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.033019  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.053630  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0429 12:54:35.054248  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.054952  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.054989  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.055455  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.055693  875440 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:35.057408  875440 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:35.057431  875440 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:35.057867  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.057920  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.076059  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0429 12:54:35.076610  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.077342  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.077391  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.077807  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.078160  875440 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:35.081183  875440 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:35.081763  875440 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:35.081795  875440 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:35.082134  875440 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:35.082497  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.082542  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.100276  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0429 12:54:35.100728  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.101239  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.101272  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.101593  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.101830  875440 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:35.102025  875440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:35.102051  875440 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:35.105080  875440 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:35.105544  875440 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:35.105575  875440 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:35.105790  875440 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:35.106043  875440 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:35.106180  875440 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:35.106312  875440 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:35.196912  875440 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:35.204207  875440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:35.222492  875440 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:35.222533  875440 api_server.go:166] Checking apiserver status ...
	I0429 12:54:35.222574  875440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:35.238878  875440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:35.250977  875440 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:35.251043  875440 ssh_runner.go:195] Run: ls
	I0429 12:54:35.256230  875440 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:35.266235  875440 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:35.266273  875440 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:35.266291  875440 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:35.266316  875440 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:35.266764  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.266816  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.282807  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I0429 12:54:35.283308  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.283845  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.283869  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.284171  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.284366  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:35.286132  875440 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:35.286155  875440 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:35.286470  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.286528  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.303956  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0429 12:54:35.304495  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.305092  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.305128  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.305532  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.305760  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:35.308962  875440 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:35.309614  875440 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:35.309644  875440 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:35.309838  875440 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:35.310162  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:35.310210  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:35.326737  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0429 12:54:35.327281  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:35.327825  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:35.327848  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:35.328172  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:35.328370  875440 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:35.328558  875440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:35.328581  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:35.331502  875440 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:35.332114  875440 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:35.332167  875440 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:35.332232  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:35.332438  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:35.332615  875440 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:35.332760  875440 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:38.383642  875440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:38.383784  875440 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:38.383808  875440 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:38.383833  875440 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:38.383861  875440 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:38.383873  875440 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:38.384212  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.384266  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.401402  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0429 12:54:38.401945  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.402522  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.402553  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.402925  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.403191  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:38.405200  875440 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:38.405221  875440 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:38.405663  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.405719  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.422799  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0429 12:54:38.423275  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.423805  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.423831  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.424167  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.424415  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:38.427434  875440 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:38.427923  875440 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:38.427954  875440 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:38.428099  875440 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:38.428714  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.428780  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.446138  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I0429 12:54:38.446659  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.447307  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.447338  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.447714  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.447924  875440 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:38.448119  875440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:38.448141  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:38.451454  875440 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:38.451941  875440 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:38.451974  875440 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:38.452189  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:38.452423  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:38.452607  875440 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:38.452803  875440 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:38.535951  875440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:38.554252  875440 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:38.554294  875440 api_server.go:166] Checking apiserver status ...
	I0429 12:54:38.554343  875440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:38.572588  875440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:38.585676  875440 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:38.585778  875440 ssh_runner.go:195] Run: ls
	I0429 12:54:38.592132  875440 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:38.597048  875440 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:38.597087  875440 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:38.597101  875440 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:38.597141  875440 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:38.597579  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.597627  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.615776  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33123
	I0429 12:54:38.616333  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.617003  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.617035  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.617445  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.617681  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:38.619425  875440 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:38.619444  875440 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:38.619796  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.619868  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.637193  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I0429 12:54:38.637803  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.638408  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.638440  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.638867  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.639101  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:38.642414  875440 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:38.642941  875440 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:38.642981  875440 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:38.643263  875440 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:38.643729  875440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:38.643861  875440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:38.660177  875440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37227
	I0429 12:54:38.660625  875440 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:38.661171  875440 main.go:141] libmachine: Using API Version  1
	I0429 12:54:38.661197  875440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:38.661535  875440 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:38.661739  875440 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:38.661920  875440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:38.661938  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:38.665333  875440 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:38.665715  875440 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:38.665760  875440 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:38.665925  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:38.666158  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:38.666324  875440 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:38.666466  875440 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:38.749146  875440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:38.766118  875440 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (3.804728301s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:43.700446  875557 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:43.700652  875557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:43.700662  875557 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:43.700668  875557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:43.700976  875557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:43.701249  875557 out.go:298] Setting JSON to false
	I0429 12:54:43.701288  875557 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:43.701408  875557 notify.go:220] Checking for updates...
	I0429 12:54:43.701831  875557 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:43.701850  875557 status.go:255] checking status of ha-212075 ...
	I0429 12:54:43.702443  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.702531  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.720893  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0429 12:54:43.721481  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.722049  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.722081  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.722518  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.722721  875557 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:43.724515  875557 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:43.724536  875557 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:43.724987  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.725051  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.743577  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0429 12:54:43.744062  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.744616  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.744650  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.744998  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.745186  875557 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:43.748358  875557 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:43.748829  875557 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:43.748864  875557 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:43.749030  875557 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:43.749504  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.749570  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.766267  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0429 12:54:43.766827  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.767522  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.767549  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.767866  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.768063  875557 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:43.768307  875557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:43.768349  875557 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:43.771606  875557 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:43.772112  875557 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:43.772142  875557 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:43.772487  875557 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:43.772769  875557 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:43.772960  875557 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:43.773129  875557 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:43.872299  875557 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:43.879271  875557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:43.896216  875557 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:43.896249  875557 api_server.go:166] Checking apiserver status ...
	I0429 12:54:43.896287  875557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:43.912832  875557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:43.924294  875557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:43.924366  875557 ssh_runner.go:195] Run: ls
	I0429 12:54:43.929766  875557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:43.938353  875557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:43.938394  875557 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:43.938407  875557 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:43.938428  875557 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:43.938788  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.938846  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.955116  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0429 12:54:43.955694  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.956288  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.956318  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.956730  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.956938  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:43.958842  875557 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 12:54:43.958870  875557 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:43.959222  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.959267  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.975292  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0429 12:54:43.975879  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.976435  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.976465  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.976852  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.977071  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:54:43.980296  875557 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:43.980774  875557 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:43.980804  875557 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:43.981010  875557 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 12:54:43.981335  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:43.981379  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:43.997974  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
	I0429 12:54:43.998551  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:43.999137  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:43.999164  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:43.999583  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:43.999866  875557 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:54:44.000067  875557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:44.000094  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:54:44.003403  875557 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:44.003854  875557 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:54:44.003895  875557 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:54:44.004063  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:54:44.004288  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:54:44.004442  875557 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:54:44.004567  875557 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	W0429 12:54:47.055635  875557 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.36:22: connect: no route to host
	W0429 12:54:47.055762  875557 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0429 12:54:47.055779  875557 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:47.055790  875557 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:54:47.055810  875557 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	I0429 12:54:47.055818  875557 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:47.056139  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.056185  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.072494  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0429 12:54:47.073074  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.073667  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.073696  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.074057  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.074250  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:47.076444  875557 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:47.076473  875557 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:47.076885  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.076947  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.093197  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0429 12:54:47.093844  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.094362  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.094389  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.094715  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.094930  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:47.098509  875557 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:47.099154  875557 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:47.099192  875557 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:47.099408  875557 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:47.099756  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.099801  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.118681  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43553
	I0429 12:54:47.119226  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.119843  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.119869  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.120289  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.120534  875557 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:47.120781  875557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:47.120808  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:47.124493  875557 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:47.124982  875557 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:47.125012  875557 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:47.125175  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:47.125392  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:47.125582  875557 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:47.125728  875557 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:47.212421  875557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:47.231089  875557 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:47.231123  875557 api_server.go:166] Checking apiserver status ...
	I0429 12:54:47.231157  875557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:47.248447  875557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:47.262056  875557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:47.262129  875557 ssh_runner.go:195] Run: ls
	I0429 12:54:47.267878  875557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:47.274220  875557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:47.274257  875557 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:47.274271  875557 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:47.274295  875557 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:47.274626  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.274680  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.291485  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0429 12:54:47.292005  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.292580  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.292612  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.292960  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.293213  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:47.295068  875557 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:47.295088  875557 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:47.295399  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.295447  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.311437  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41501
	I0429 12:54:47.311966  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.312478  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.312503  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.312850  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.313065  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:47.316906  875557 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:47.317368  875557 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:47.317411  875557 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:47.317578  875557 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:47.317980  875557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:47.318047  875557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:47.334248  875557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0429 12:54:47.334695  875557 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:47.335233  875557 main.go:141] libmachine: Using API Version  1
	I0429 12:54:47.335262  875557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:47.335713  875557 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:47.335952  875557 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:47.336153  875557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:47.336174  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:47.339834  875557 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:47.340383  875557 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:47.340584  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:47.340662  875557 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:47.340809  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:47.340983  875557 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:47.341118  875557 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:47.424010  875557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:47.440607  875557 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 7 (689.417946ms)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:54:55.003051  875695 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:54:55.003351  875695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:55.003378  875695 out.go:304] Setting ErrFile to fd 2...
	I0429 12:54:55.003385  875695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:54:55.003590  875695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:54:55.003797  875695 out.go:298] Setting JSON to false
	I0429 12:54:55.003832  875695 mustload.go:65] Loading cluster: ha-212075
	I0429 12:54:55.003982  875695 notify.go:220] Checking for updates...
	I0429 12:54:55.004432  875695 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:54:55.004454  875695 status.go:255] checking status of ha-212075 ...
	I0429 12:54:55.004941  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.005011  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.022581  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34201
	I0429 12:54:55.023163  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.023857  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.023884  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.024416  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.024675  875695 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:54:55.026461  875695 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:54:55.026486  875695 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:55.026920  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.026986  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.043508  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0429 12:54:55.044028  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.044524  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.044549  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.044902  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.045111  875695 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:54:55.048074  875695 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:55.048491  875695 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:55.048519  875695 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:55.048732  875695 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:54:55.049045  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.049084  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.064784  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I0429 12:54:55.065175  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.065657  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.065681  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.066051  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.066283  875695 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:54:55.066504  875695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:55.066532  875695 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:54:55.070075  875695 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:55.070600  875695 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:54:55.070645  875695 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:54:55.070822  875695 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:54:55.071061  875695 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:54:55.071279  875695 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:54:55.071488  875695 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:54:55.165641  875695 ssh_runner.go:195] Run: systemctl --version
	I0429 12:54:55.176628  875695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:55.194684  875695 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:55.194728  875695 api_server.go:166] Checking apiserver status ...
	I0429 12:54:55.194772  875695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:55.209965  875695 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:54:55.221184  875695 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:55.221262  875695 ssh_runner.go:195] Run: ls
	I0429 12:54:55.226493  875695 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:55.232722  875695 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:55.232758  875695 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:54:55.232773  875695 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:55.232796  875695 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:54:55.233116  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.233152  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.250094  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0429 12:54:55.250575  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.251141  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.251172  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.251570  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.251776  875695 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:54:55.253683  875695 status.go:330] ha-212075-m02 host status = "Stopped" (err=<nil>)
	I0429 12:54:55.253706  875695 status.go:343] host is not running, skipping remaining checks
	I0429 12:54:55.253715  875695 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:55.253742  875695 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:54:55.254189  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.254247  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.270276  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I0429 12:54:55.270848  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.271468  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.271495  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.271814  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.272008  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:54:55.273871  875695 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:54:55.273893  875695 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:55.274194  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.274219  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.291300  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39631
	I0429 12:54:55.291797  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.292292  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.292313  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.292619  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.292827  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:54:55.295778  875695 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:55.296288  875695 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:55.296323  875695 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:55.296446  875695 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:54:55.296776  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.296830  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.316168  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36513
	I0429 12:54:55.316630  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.317285  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.317317  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.317693  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.317909  875695 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:54:55.318143  875695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:55.318165  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:54:55.321339  875695 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:55.321734  875695 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:54:55.321764  875695 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:54:55.322022  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:54:55.322239  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:54:55.322418  875695 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:54:55.322532  875695 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:54:55.408072  875695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:55.426448  875695 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:54:55.426487  875695 api_server.go:166] Checking apiserver status ...
	I0429 12:54:55.426526  875695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:54:55.442042  875695 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:54:55.453233  875695 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:54:55.453324  875695 ssh_runner.go:195] Run: ls
	I0429 12:54:55.458609  875695 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:54:55.463179  875695 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:54:55.463216  875695 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:54:55.463237  875695 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:54:55.463263  875695 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:54:55.463645  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.463704  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.480762  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0429 12:54:55.481285  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.481776  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.481800  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.482141  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.482344  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:54:55.484280  875695 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:54:55.484306  875695 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:55.484659  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.484704  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.500741  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0429 12:54:55.501235  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.501812  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.501833  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.502149  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.502414  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:54:55.505498  875695 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:55.506027  875695 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:55.506063  875695 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:55.506245  875695 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:54:55.506683  875695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:54:55.506718  875695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:54:55.523383  875695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0429 12:54:55.523872  875695 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:54:55.524375  875695 main.go:141] libmachine: Using API Version  1
	I0429 12:54:55.524405  875695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:54:55.524748  875695 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:54:55.524977  875695 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:54:55.525202  875695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:54:55.525229  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:54:55.528224  875695 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:55.528669  875695 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:54:55.528705  875695 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:54:55.528857  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:54:55.529042  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:54:55.529200  875695 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:54:55.529337  875695 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:54:55.611446  875695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:54:55.627207  875695 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
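Two probes recur in the per-node checks above: "sudo systemctl is-active --quiet service kubelet", whose zero exit status is presumably what feeds the kubelet: Running field in the status output, and "df -h /var | awk 'NR==2{print $5}'", which prints the use% column for the filesystem backing /var (typically something like "17%") as a quick disk-usage signal. A minimal stand-alone Go sketch of that second probe, run locally with os/exec rather than through minikube's SSH runner (the sh -c invocation mirrors the log; everything else here is illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		// Same pipeline the status log runs on each node, executed locally for illustration.
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			fmt.Println("df probe failed:", err)
			return
		}
		// df prints something like "17%\n"; strip whitespace and the percent sign.
		pct, err := strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(out)), "%"))
		if err != nil {
			fmt.Println("unexpected df output:", string(out))
			return
		}
		fmt.Printf("/var is %d%% full\n", pct)
	}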
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 7 (689.406211ms)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-212075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:55:01.906098  875784 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:55:01.906252  875784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:55:01.906262  875784 out.go:304] Setting ErrFile to fd 2...
	I0429 12:55:01.906266  875784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:55:01.906489  875784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:55:01.906758  875784 out.go:298] Setting JSON to false
	I0429 12:55:01.906790  875784 mustload.go:65] Loading cluster: ha-212075
	I0429 12:55:01.906920  875784 notify.go:220] Checking for updates...
	I0429 12:55:01.907298  875784 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:55:01.907317  875784 status.go:255] checking status of ha-212075 ...
	I0429 12:55:01.907783  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:01.907849  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:01.927071  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0429 12:55:01.927760  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:01.928457  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:01.928487  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:01.928850  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:01.929054  875784 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:55:01.931335  875784 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 12:55:01.931381  875784 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:55:01.931798  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:01.931874  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:01.950263  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0429 12:55:01.950763  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:01.951565  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:01.951597  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:01.952052  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:01.952296  875784 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:55:01.955713  875784 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:55:01.956142  875784 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:55:01.956176  875784 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:55:01.956449  875784 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:55:01.956775  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:01.956833  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:01.973924  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0429 12:55:01.974467  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:01.975076  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:01.975107  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:01.975481  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:01.975691  875784 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:55:01.975925  875784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:55:01.975978  875784 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:55:01.979094  875784 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:55:01.979559  875784 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:55:01.979595  875784 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:55:01.979768  875784 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:55:01.979971  875784 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:55:01.980137  875784 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:55:01.980345  875784 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:55:02.068735  875784 ssh_runner.go:195] Run: systemctl --version
	I0429 12:55:02.079048  875784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:55:02.096565  875784 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:55:02.096601  875784 api_server.go:166] Checking apiserver status ...
	I0429 12:55:02.096640  875784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:55:02.112968  875784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0429 12:55:02.125606  875784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:55:02.125686  875784 ssh_runner.go:195] Run: ls
	I0429 12:55:02.130571  875784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:55:02.135855  875784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:55:02.135890  875784 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 12:55:02.135901  875784 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:55:02.135923  875784 status.go:255] checking status of ha-212075-m02 ...
	I0429 12:55:02.136250  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.136279  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.153026  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0429 12:55:02.153631  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.154204  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.154236  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.154569  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.154779  875784 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:55:02.156681  875784 status.go:330] ha-212075-m02 host status = "Stopped" (err=<nil>)
	I0429 12:55:02.156701  875784 status.go:343] host is not running, skipping remaining checks
	I0429 12:55:02.156707  875784 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:55:02.156729  875784 status.go:255] checking status of ha-212075-m03 ...
	I0429 12:55:02.157075  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.157115  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.172820  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36953
	I0429 12:55:02.173430  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.173991  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.174021  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.174370  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.174571  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:55:02.176340  875784 status.go:330] ha-212075-m03 host status = "Running" (err=<nil>)
	I0429 12:55:02.176360  875784 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:55:02.176681  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.176748  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.193059  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0429 12:55:02.193561  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.194203  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.194240  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.194640  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.194874  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:55:02.197767  875784 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:02.198216  875784 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:55:02.198249  875784 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:02.198552  875784 host.go:66] Checking if "ha-212075-m03" exists ...
	I0429 12:55:02.198886  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.198914  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.216741  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0429 12:55:02.217216  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.217659  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.217692  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.217997  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.218191  875784 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:55:02.218386  875784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:55:02.218410  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:55:02.221508  875784 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:02.221995  875784 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:55:02.222033  875784 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:02.222216  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:55:02.222417  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:55:02.222605  875784 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:55:02.222762  875784 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:55:02.308228  875784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:55:02.326011  875784 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 12:55:02.326045  875784 api_server.go:166] Checking apiserver status ...
	I0429 12:55:02.326092  875784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:55:02.345047  875784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0429 12:55:02.356922  875784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:55:02.357041  875784 ssh_runner.go:195] Run: ls
	I0429 12:55:02.362417  875784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:55:02.367046  875784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:55:02.367089  875784 status.go:422] ha-212075-m03 apiserver status = Running (err=<nil>)
	I0429 12:55:02.367101  875784 status.go:257] ha-212075-m03 status: &{Name:ha-212075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:55:02.367120  875784 status.go:255] checking status of ha-212075-m04 ...
	I0429 12:55:02.367588  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.367627  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.383748  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I0429 12:55:02.384205  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.384705  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.384726  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.385152  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.385376  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:55:02.387081  875784 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 12:55:02.387102  875784 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:55:02.387427  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.387458  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.403521  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0429 12:55:02.404068  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.404604  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.404630  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.404963  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.405175  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 12:55:02.408257  875784 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:02.408783  875784 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:55:02.408806  875784 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:02.408947  875784 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 12:55:02.409267  875784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:02.409308  875784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:02.427504  875784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0429 12:55:02.428039  875784 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:02.428570  875784 main.go:141] libmachine: Using API Version  1
	I0429 12:55:02.428593  875784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:02.429005  875784 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:02.429182  875784 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:55:02.429466  875784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:55:02.429501  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:55:02.432880  875784 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:02.433491  875784 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:55:02.433559  875784 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:02.433717  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:55:02.433963  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:55:02.434142  875784 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:55:02.434302  875784 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:55:02.515911  875784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:55:02.531180  875784 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr" : exit status 7
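For the control-plane nodes, the stderr log above goes further than the kubelet check: it pgreps kube-apiserver, attempts to read the process's freezer cgroup (the "unable to find freezer cgroup" warning is non-fatal and most likely just means the guest uses the cgroup v2 unified hierarchy, where /proc/<pid>/cgroup carries no per-controller freezer line), and then GETs https://192.168.39.254:8443/healthz, treating a 200 "ok" as a healthy apiserver. A rough stand-alone sketch of that final HTTP probe; certificate verification is skipped here only because the cluster's apiserver certificate is issued by minikubeCA rather than a CA in the system trust store:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Shared apiserver endpoint from the log's kubeconfig; note it is distinct
		// from the individual node IPs (.97, .109, .139).
		const healthz = "https://192.168.39.254:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Insecure TLS is for this illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(healthz)
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
	}

Note that the exit status 7 flagged here lines up with ha-212075-m02 still reporting host, kubelet and apiserver as Stopped in the stdout above, not with these healthz probes, which returned 200 for both ha-212075 and ha-212075-m03.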
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-212075 -n ha-212075
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 logs -n 25: (1.579075574s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m03_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m04 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp testdata/cp-test.txt                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m04_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03:/home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m03 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-212075 node stop m02 -v=7                                                     | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-212075 node start m02 -v=7                                                    | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:47:10
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:47:10.677919  870218 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:47:10.678233  870218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:47:10.678243  870218 out.go:304] Setting ErrFile to fd 2...
	I0429 12:47:10.678248  870218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:47:10.678446  870218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:47:10.679112  870218 out.go:298] Setting JSON to false
	I0429 12:47:10.680123  870218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77376,"bootTime":1714317455,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:47:10.680195  870218 start.go:139] virtualization: kvm guest
	I0429 12:47:10.682364  870218 out.go:177] * [ha-212075] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:47:10.683575  870218 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:47:10.683620  870218 notify.go:220] Checking for updates...
	I0429 12:47:10.684719  870218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:47:10.686075  870218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:47:10.687233  870218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:10.688452  870218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:47:10.689537  870218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:47:10.690735  870218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:47:10.726918  870218 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 12:47:10.728083  870218 start.go:297] selected driver: kvm2
	I0429 12:47:10.728096  870218 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:47:10.728109  870218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:47:10.728816  870218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:47:10.728911  870218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:47:10.744767  870218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:47:10.744835  870218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:47:10.745104  870218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:47:10.745163  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:10.745175  870218 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:47:10.745180  870218 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:47:10.745248  870218 start.go:340] cluster config:
	{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:47:10.745350  870218 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:47:10.747127  870218 out.go:177] * Starting "ha-212075" primary control-plane node in "ha-212075" cluster
	I0429 12:47:10.748332  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:47:10.748369  870218 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 12:47:10.748377  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:47:10.748457  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:47:10.748467  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:47:10.748770  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:47:10.748791  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json: {Name:mkcbad01c1c0b2ec15b4df8b0dfb07d2b34331f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:10.749013  870218 start.go:360] acquireMachinesLock for ha-212075: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:47:10.749049  870218 start.go:364] duration metric: took 18.822µs to acquireMachinesLock for "ha-212075"
	I0429 12:47:10.749068  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:47:10.749132  870218 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 12:47:10.750711  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:47:10.750854  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:47:10.750892  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:47:10.766284  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0429 12:47:10.766814  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:47:10.767483  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:47:10.767507  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:47:10.767857  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:47:10.768171  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:10.768384  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:10.768534  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:47:10.768583  870218 client.go:168] LocalClient.Create starting
	I0429 12:47:10.768617  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:47:10.768656  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:47:10.768671  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:47:10.768720  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:47:10.768743  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:47:10.768756  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:47:10.768775  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:47:10.768787  870218 main.go:141] libmachine: (ha-212075) Calling .PreCreateCheck
	I0429 12:47:10.769168  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:10.769571  870218 main.go:141] libmachine: Creating machine...
	I0429 12:47:10.769586  870218 main.go:141] libmachine: (ha-212075) Calling .Create
	I0429 12:47:10.769732  870218 main.go:141] libmachine: (ha-212075) Creating KVM machine...
	I0429 12:47:10.771123  870218 main.go:141] libmachine: (ha-212075) DBG | found existing default KVM network
	I0429 12:47:10.771904  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:10.771751  870241 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0429 12:47:10.771972  870218 main.go:141] libmachine: (ha-212075) DBG | created network xml: 
	I0429 12:47:10.771992  870218 main.go:141] libmachine: (ha-212075) DBG | <network>
	I0429 12:47:10.772001  870218 main.go:141] libmachine: (ha-212075) DBG |   <name>mk-ha-212075</name>
	I0429 12:47:10.772006  870218 main.go:141] libmachine: (ha-212075) DBG |   <dns enable='no'/>
	I0429 12:47:10.772013  870218 main.go:141] libmachine: (ha-212075) DBG |   
	I0429 12:47:10.772019  870218 main.go:141] libmachine: (ha-212075) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 12:47:10.772027  870218 main.go:141] libmachine: (ha-212075) DBG |     <dhcp>
	I0429 12:47:10.772033  870218 main.go:141] libmachine: (ha-212075) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 12:47:10.772045  870218 main.go:141] libmachine: (ha-212075) DBG |     </dhcp>
	I0429 12:47:10.772053  870218 main.go:141] libmachine: (ha-212075) DBG |   </ip>
	I0429 12:47:10.772063  870218 main.go:141] libmachine: (ha-212075) DBG |   
	I0429 12:47:10.772068  870218 main.go:141] libmachine: (ha-212075) DBG | </network>
	I0429 12:47:10.772075  870218 main.go:141] libmachine: (ha-212075) DBG | 
	I0429 12:47:10.777807  870218 main.go:141] libmachine: (ha-212075) DBG | trying to create private KVM network mk-ha-212075 192.168.39.0/24...
	I0429 12:47:10.853260  870218 main.go:141] libmachine: (ha-212075) DBG | private KVM network mk-ha-212075 192.168.39.0/24 created
	I0429 12:47:10.853342  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:10.853194  870241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:10.853364  870218 main.go:141] libmachine: (ha-212075) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 ...
	I0429 12:47:10.853387  870218 main.go:141] libmachine: (ha-212075) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:47:10.853475  870218 main.go:141] libmachine: (ha-212075) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:47:11.125251  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.125115  870241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa...
	I0429 12:47:11.350613  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.350414  870241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/ha-212075.rawdisk...
	I0429 12:47:11.350656  870218 main.go:141] libmachine: (ha-212075) DBG | Writing magic tar header
	I0429 12:47:11.350671  870218 main.go:141] libmachine: (ha-212075) DBG | Writing SSH key tar header
	I0429 12:47:11.350683  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:11.350536  870241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 ...
	I0429 12:47:11.350697  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075
	I0429 12:47:11.350709  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:47:11.350722  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075 (perms=drwx------)
	I0429 12:47:11.350739  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:47:11.350747  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:47:11.350754  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:47:11.350764  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:47:11.350775  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:47:11.350787  870218 main.go:141] libmachine: (ha-212075) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:47:11.350799  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:47:11.350806  870218 main.go:141] libmachine: (ha-212075) Creating domain...
	I0429 12:47:11.350818  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:47:11.350832  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:47:11.350922  870218 main.go:141] libmachine: (ha-212075) DBG | Checking permissions on dir: /home
	I0429 12:47:11.350949  870218 main.go:141] libmachine: (ha-212075) DBG | Skipping /home - not owner
	I0429 12:47:11.352182  870218 main.go:141] libmachine: (ha-212075) define libvirt domain using xml: 
	I0429 12:47:11.352232  870218 main.go:141] libmachine: (ha-212075) <domain type='kvm'>
	I0429 12:47:11.352268  870218 main.go:141] libmachine: (ha-212075)   <name>ha-212075</name>
	I0429 12:47:11.352294  870218 main.go:141] libmachine: (ha-212075)   <memory unit='MiB'>2200</memory>
	I0429 12:47:11.352305  870218 main.go:141] libmachine: (ha-212075)   <vcpu>2</vcpu>
	I0429 12:47:11.352316  870218 main.go:141] libmachine: (ha-212075)   <features>
	I0429 12:47:11.352326  870218 main.go:141] libmachine: (ha-212075)     <acpi/>
	I0429 12:47:11.352337  870218 main.go:141] libmachine: (ha-212075)     <apic/>
	I0429 12:47:11.352346  870218 main.go:141] libmachine: (ha-212075)     <pae/>
	I0429 12:47:11.352372  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352384  870218 main.go:141] libmachine: (ha-212075)   </features>
	I0429 12:47:11.352396  870218 main.go:141] libmachine: (ha-212075)   <cpu mode='host-passthrough'>
	I0429 12:47:11.352405  870218 main.go:141] libmachine: (ha-212075)   
	I0429 12:47:11.352415  870218 main.go:141] libmachine: (ha-212075)   </cpu>
	I0429 12:47:11.352424  870218 main.go:141] libmachine: (ha-212075)   <os>
	I0429 12:47:11.352435  870218 main.go:141] libmachine: (ha-212075)     <type>hvm</type>
	I0429 12:47:11.352446  870218 main.go:141] libmachine: (ha-212075)     <boot dev='cdrom'/>
	I0429 12:47:11.352456  870218 main.go:141] libmachine: (ha-212075)     <boot dev='hd'/>
	I0429 12:47:11.352472  870218 main.go:141] libmachine: (ha-212075)     <bootmenu enable='no'/>
	I0429 12:47:11.352482  870218 main.go:141] libmachine: (ha-212075)   </os>
	I0429 12:47:11.352492  870218 main.go:141] libmachine: (ha-212075)   <devices>
	I0429 12:47:11.352506  870218 main.go:141] libmachine: (ha-212075)     <disk type='file' device='cdrom'>
	I0429 12:47:11.352528  870218 main.go:141] libmachine: (ha-212075)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/boot2docker.iso'/>
	I0429 12:47:11.352546  870218 main.go:141] libmachine: (ha-212075)       <target dev='hdc' bus='scsi'/>
	I0429 12:47:11.352558  870218 main.go:141] libmachine: (ha-212075)       <readonly/>
	I0429 12:47:11.352571  870218 main.go:141] libmachine: (ha-212075)     </disk>
	I0429 12:47:11.352580  870218 main.go:141] libmachine: (ha-212075)     <disk type='file' device='disk'>
	I0429 12:47:11.352595  870218 main.go:141] libmachine: (ha-212075)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:47:11.352614  870218 main.go:141] libmachine: (ha-212075)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/ha-212075.rawdisk'/>
	I0429 12:47:11.352628  870218 main.go:141] libmachine: (ha-212075)       <target dev='hda' bus='virtio'/>
	I0429 12:47:11.352643  870218 main.go:141] libmachine: (ha-212075)     </disk>
	I0429 12:47:11.352660  870218 main.go:141] libmachine: (ha-212075)     <interface type='network'>
	I0429 12:47:11.352671  870218 main.go:141] libmachine: (ha-212075)       <source network='mk-ha-212075'/>
	I0429 12:47:11.352679  870218 main.go:141] libmachine: (ha-212075)       <model type='virtio'/>
	I0429 12:47:11.352691  870218 main.go:141] libmachine: (ha-212075)     </interface>
	I0429 12:47:11.352704  870218 main.go:141] libmachine: (ha-212075)     <interface type='network'>
	I0429 12:47:11.352714  870218 main.go:141] libmachine: (ha-212075)       <source network='default'/>
	I0429 12:47:11.352726  870218 main.go:141] libmachine: (ha-212075)       <model type='virtio'/>
	I0429 12:47:11.352736  870218 main.go:141] libmachine: (ha-212075)     </interface>
	I0429 12:47:11.352747  870218 main.go:141] libmachine: (ha-212075)     <serial type='pty'>
	I0429 12:47:11.352760  870218 main.go:141] libmachine: (ha-212075)       <target port='0'/>
	I0429 12:47:11.352770  870218 main.go:141] libmachine: (ha-212075)     </serial>
	I0429 12:47:11.352783  870218 main.go:141] libmachine: (ha-212075)     <console type='pty'>
	I0429 12:47:11.352796  870218 main.go:141] libmachine: (ha-212075)       <target type='serial' port='0'/>
	I0429 12:47:11.352807  870218 main.go:141] libmachine: (ha-212075)     </console>
	I0429 12:47:11.352818  870218 main.go:141] libmachine: (ha-212075)     <rng model='virtio'>
	I0429 12:47:11.352833  870218 main.go:141] libmachine: (ha-212075)       <backend model='random'>/dev/random</backend>
	I0429 12:47:11.352844  870218 main.go:141] libmachine: (ha-212075)     </rng>
	I0429 12:47:11.352851  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352858  870218 main.go:141] libmachine: (ha-212075)     
	I0429 12:47:11.352867  870218 main.go:141] libmachine: (ha-212075)   </devices>
	I0429 12:47:11.352879  870218 main.go:141] libmachine: (ha-212075) </domain>
	I0429 12:47:11.352888  870218 main.go:141] libmachine: (ha-212075) 
	I0429 12:47:11.358129  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:b9:e2:15 in network default
	I0429 12:47:11.358761  870218 main.go:141] libmachine: (ha-212075) Ensuring networks are active...
	I0429 12:47:11.358785  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:11.359625  870218 main.go:141] libmachine: (ha-212075) Ensuring network default is active
	I0429 12:47:11.359939  870218 main.go:141] libmachine: (ha-212075) Ensuring network mk-ha-212075 is active
	I0429 12:47:11.360450  870218 main.go:141] libmachine: (ha-212075) Getting domain xml...
	I0429 12:47:11.361219  870218 main.go:141] libmachine: (ha-212075) Creating domain...
	I0429 12:47:12.584589  870218 main.go:141] libmachine: (ha-212075) Waiting to get IP...
	I0429 12:47:12.585394  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:12.585797  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:12.585868  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:12.585792  870241 retry.go:31] will retry after 305.881234ms: waiting for machine to come up
	I0429 12:47:12.893551  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:12.894049  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:12.894079  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:12.894024  870241 retry.go:31] will retry after 344.55293ms: waiting for machine to come up
	I0429 12:47:13.241013  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:13.241469  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:13.241496  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:13.241434  870241 retry.go:31] will retry after 343.048472ms: waiting for machine to come up
	I0429 12:47:13.586141  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:13.586605  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:13.586654  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:13.586558  870241 retry.go:31] will retry after 450.225843ms: waiting for machine to come up
	I0429 12:47:14.038240  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:14.038757  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:14.038783  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:14.038699  870241 retry.go:31] will retry after 523.602131ms: waiting for machine to come up
	I0429 12:47:14.563556  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:14.564014  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:14.564045  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:14.563929  870241 retry.go:31] will retry after 805.259699ms: waiting for machine to come up
	I0429 12:47:15.371056  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:15.371475  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:15.371526  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:15.371456  870241 retry.go:31] will retry after 966.64669ms: waiting for machine to come up
	I0429 12:47:16.339433  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:16.339834  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:16.339867  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:16.339785  870241 retry.go:31] will retry after 1.23057243s: waiting for machine to come up
	I0429 12:47:17.572420  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:17.572903  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:17.572937  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:17.572841  870241 retry.go:31] will retry after 1.383346304s: waiting for machine to come up
	I0429 12:47:18.958480  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:18.958907  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:18.958936  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:18.958868  870241 retry.go:31] will retry after 1.674064931s: waiting for machine to come up
	I0429 12:47:20.634352  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:20.634768  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:20.634806  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:20.634700  870241 retry.go:31] will retry after 2.486061293s: waiting for machine to come up
	I0429 12:47:23.122390  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:23.122875  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:23.122898  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:23.122835  870241 retry.go:31] will retry after 2.897978896s: waiting for machine to come up
	I0429 12:47:26.022310  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:26.022740  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:26.022767  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:26.022711  870241 retry.go:31] will retry after 2.882393702s: waiting for machine to come up
	I0429 12:47:28.908794  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:28.909215  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find current IP address of domain ha-212075 in network mk-ha-212075
	I0429 12:47:28.909242  870218 main.go:141] libmachine: (ha-212075) DBG | I0429 12:47:28.909174  870241 retry.go:31] will retry after 5.119530721s: waiting for machine to come up
	I0429 12:47:34.030038  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.030480  870218 main.go:141] libmachine: (ha-212075) Found IP for machine: 192.168.39.97
	I0429 12:47:34.030515  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has current primary IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
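
The loop above (each attempt logged by retry.go:31) is a retry with a growing, jittered delay between DHCP-lease lookups until the guest reports an address. A minimal Go sketch of that pattern follows; lookupIP is a placeholder and the 300ms starting delay and growth factor are illustrative assumptions, not minikube's actual constants.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases;
    // it is a placeholder, not the real libmachine call.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a growing, jittered delay until an
    // address appears or the overall deadline passes.
    func waitForIP(maxWait time.Duration) (string, error) {
    	deadline := time.Now().Add(maxWait)
    	backoff := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		backoff = backoff * 3 / 2
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	if _, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
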
	I0429 12:47:34.030535  870218 main.go:141] libmachine: (ha-212075) Reserving static IP address...
	I0429 12:47:34.030917  870218 main.go:141] libmachine: (ha-212075) DBG | unable to find host DHCP lease matching {name: "ha-212075", mac: "52:54:00:c0:56:df", ip: "192.168.39.97"} in network mk-ha-212075
	I0429 12:47:34.119852  870218 main.go:141] libmachine: (ha-212075) DBG | Getting to WaitForSSH function...
	I0429 12:47:34.119883  870218 main.go:141] libmachine: (ha-212075) Reserved static IP address: 192.168.39.97
	I0429 12:47:34.119897  870218 main.go:141] libmachine: (ha-212075) Waiting for SSH to be available...
	I0429 12:47:34.122597  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.123056  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.123087  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.123278  870218 main.go:141] libmachine: (ha-212075) DBG | Using SSH client type: external
	I0429 12:47:34.123304  870218 main.go:141] libmachine: (ha-212075) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa (-rw-------)
	I0429 12:47:34.123350  870218 main.go:141] libmachine: (ha-212075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:47:34.123423  870218 main.go:141] libmachine: (ha-212075) DBG | About to run SSH command:
	I0429 12:47:34.123437  870218 main.go:141] libmachine: (ha-212075) DBG | exit 0
	I0429 12:47:34.255439  870218 main.go:141] libmachine: (ha-212075) DBG | SSH cmd err, output: <nil>: 
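
WaitForSSH shells out to the system ssh client and runs "exit 0"; a zero exit status means the guest's sshd is accepting connections. A rough Go sketch of that probe; the key path is a placeholder and the option list is a subset of the flags shown in the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReachable mirrors the "exit 0" probe in the log: if a no-op command
    // succeeds over SSH, the guest is reachable.
    func sshReachable(user, host, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	// Host and user taken from the log; the key path is illustrative.
    	fmt.Println(sshReachable("docker", "192.168.39.97", "/path/to/id_rsa"))
    }
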
	I0429 12:47:34.255688  870218 main.go:141] libmachine: (ha-212075) KVM machine creation complete!
	I0429 12:47:34.256038  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:34.256561  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:34.256768  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:34.256961  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:47:34.256974  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:47:34.258344  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:47:34.258360  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:47:34.258367  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:47:34.258376  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.260892  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.261301  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.261335  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.261457  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.261727  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.261875  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.261974  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.262143  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.262365  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.262379  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:47:34.375070  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:47:34.375104  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:47:34.375116  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.377839  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.378225  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.378270  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.378421  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.378603  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.378801  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.378939  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.379246  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.379463  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.379477  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:47:34.496702  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:47:34.496834  870218 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:47:34.496850  870218 main.go:141] libmachine: Provisioning with buildroot...
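
Provisioner detection reads /etc/os-release over SSH and matches the ID field ("buildroot" here) against the known provisioners. A simplified sketch of that parse, not the libmachine implementation:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSReleaseID extracts the ID field from /etc/os-release-style text,
    // which is the value the provisioner match is based on.
    func parseOSReleaseID(contents string) string {
    	sc := bufio.NewScanner(strings.NewReader(contents))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    	fmt.Println(parseOSReleaseID(out)) // buildroot
    }
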
	I0429 12:47:34.496862  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.497129  870218 buildroot.go:166] provisioning hostname "ha-212075"
	I0429 12:47:34.497157  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.497396  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.500100  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.500522  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.500549  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.500743  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.500984  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.501170  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.501329  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.501491  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.501694  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.501709  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075 && echo "ha-212075" | sudo tee /etc/hostname
	I0429 12:47:34.639970  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:47:34.640000  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.643277  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.643700  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.643736  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.643969  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.644183  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.644395  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.644531  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.644725  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:34.644909  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:34.644929  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:47:34.769813  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:47:34.769862  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:47:34.769887  870218 buildroot.go:174] setting up certificates
	I0429 12:47:34.769902  870218 provision.go:84] configureAuth start
	I0429 12:47:34.769920  870218 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:47:34.770254  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:34.773213  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.773664  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.773697  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.773877  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.776462  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.776823  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.776853  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.777023  870218 provision.go:143] copyHostCerts
	I0429 12:47:34.777061  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:47:34.777107  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:47:34.777120  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:47:34.777220  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:47:34.777336  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:47:34.777363  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:47:34.777371  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:47:34.777417  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:47:34.777495  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:47:34.777518  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:47:34.777525  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:47:34.777561  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:47:34.777648  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075 san=[127.0.0.1 192.168.39.97 ha-212075 localhost minikube]
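
The server certificate is generated with SANs covering the loopback address, the VM IP, the node name, localhost and minikube, and is signed by the minikube CA. The sketch below shows the same idea with Go's crypto/x509, self-signed for brevity; the key size, lifetime and subject are illustrative assumptions.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Generate a key and issue a certificate whose SANs match the list in
    	// the log. Self-signed here; the real step signs with the minikube CA.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-212075"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-212075", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.97")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued %d-byte DER server certificate\n", len(der))
    }
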
	I0429 12:47:34.986246  870218 provision.go:177] copyRemoteCerts
	I0429 12:47:34.986315  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:47:34.986343  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:34.989211  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.989554  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:34.989587  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:34.989780  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:34.990033  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:34.990225  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:34.990326  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.077898  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:47:35.078002  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:47:35.103811  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:47:35.103903  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:47:35.130521  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:47:35.130625  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:47:35.158296  870218 provision.go:87] duration metric: took 388.331009ms to configureAuth
	I0429 12:47:35.158348  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:47:35.158647  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:47:35.158755  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.162096  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.162516  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.162550  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.162789  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.163036  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.163228  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.163376  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.163547  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:35.163779  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:35.163806  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:47:35.454761  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:47:35.454801  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:47:35.454812  870218 main.go:141] libmachine: (ha-212075) Calling .GetURL
	I0429 12:47:35.456291  870218 main.go:141] libmachine: (ha-212075) DBG | Using libvirt version 6000000
	I0429 12:47:35.459567  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.459976  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.460009  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.460156  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:47:35.460174  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:47:35.460182  870218 client.go:171] duration metric: took 24.691589554s to LocalClient.Create
	I0429 12:47:35.460213  870218 start.go:167] duration metric: took 24.691680665s to libmachine.API.Create "ha-212075"
	I0429 12:47:35.460226  870218 start.go:293] postStartSetup for "ha-212075" (driver="kvm2")
	I0429 12:47:35.460240  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:47:35.460264  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.460530  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:47:35.460565  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.462997  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.463401  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.463421  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.463619  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.463842  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.463989  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.464114  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.554760  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:47:35.559334  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:47:35.559381  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:47:35.559459  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:47:35.559534  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:47:35.559544  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:47:35.559645  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:47:35.569671  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:47:35.596267  870218 start.go:296] duration metric: took 136.022682ms for postStartSetup
	I0429 12:47:35.596345  870218 main.go:141] libmachine: (ha-212075) Calling .GetConfigRaw
	I0429 12:47:35.596981  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:35.599978  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.600353  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.600383  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.600634  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:47:35.600866  870218 start.go:128] duration metric: took 24.851721937s to createHost
	I0429 12:47:35.600897  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.603199  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.603644  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.603674  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.603745  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.603970  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.604159  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.604339  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.604533  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:35.604712  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:47:35.604729  870218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:47:35.720740  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394855.695919494
	
	I0429 12:47:35.720765  870218 fix.go:216] guest clock: 1714394855.695919494
	I0429 12:47:35.720776  870218 fix.go:229] Guest: 2024-04-29 12:47:35.695919494 +0000 UTC Remote: 2024-04-29 12:47:35.600880557 +0000 UTC m=+24.976033512 (delta=95.038937ms)
	I0429 12:47:35.720806  870218 fix.go:200] guest clock delta is within tolerance: 95.038937ms
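
The guest-clock check runs "date +%s.%N" in the VM, compares the result with the host's clock and accepts a small skew (95ms here). A tiny Go sketch of that comparison; the 2s tolerance is an assumed value, not minikube's constant.

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance compares the timestamp reported by the guest with the
    // host's notion of now and accepts small skew, as the fix.go check does.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(95 * time.Millisecond) // roughly the delta seen above
    	delta, ok := withinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
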
	I0429 12:47:35.720812  870218 start.go:83] releasing machines lock for "ha-212075", held for 24.971753151s
	I0429 12:47:35.720837  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.721124  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:35.723665  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.724032  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.724067  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.724221  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.724774  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.724980  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:47:35.725072  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:47:35.725118  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.725227  870218 ssh_runner.go:195] Run: cat /version.json
	I0429 12:47:35.725260  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:47:35.728155  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728311  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728537  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.728566  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728669  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:35.728701  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:35.728731  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.728877  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:47:35.728952  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.729132  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.729136  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:47:35.729304  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:47:35.729322  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.729443  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:47:35.837269  870218 ssh_runner.go:195] Run: systemctl --version
	I0429 12:47:35.844092  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:47:36.005807  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:47:36.012800  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:47:36.012902  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:47:36.030274  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:47:36.030311  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:47:36.030402  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:47:36.046974  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:36.061895  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:47:36.061982  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:47:36.076800  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:47:36.091454  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:47:36.211024  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:47:36.351716  870218 docker.go:233] disabling docker service ...
	I0429 12:47:36.351802  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:47:36.367728  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:47:36.381746  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:47:36.521448  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:47:36.643441  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:47:36.658397  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:36.678361  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:47:36.678431  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.690326  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:47:36.690412  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.702156  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.714496  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.726598  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:47:36.739087  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.750643  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:47:36.769859  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
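
The run of sed commands above rewrites keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, conmon_cgroup and the default_sysctls list. The Go sketch below captures the underlying idea of replacing whatever assignment exists for a key; the regular expression is illustrative, not the exact one minikube runs.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setConfKey replaces any existing assignment of key in a crio.conf-style
    // file with the desired quoted value, much like the sed edits above.
    func setConfKey(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
    }

    func main() {
    	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
    	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
    	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(conf)
    }
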
	I0429 12:47:36.781534  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:47:36.791887  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:47:36.791968  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:47:36.806869  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
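
When the bridge-netfilter sysctl cannot be read, the provisioner loads br_netfilter and then turns on IPv4 forwarding, which is what the three commands above do. A rough Go sketch of that sequence; it needs root, and the procfs path is the standard one.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter if the bridge-nf sysctl is not
    // readable, then enables IPv4 forwarding, mirroring the commands above.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
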
	I0429 12:47:36.817796  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:36.937337  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:47:37.079704  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:47:37.079785  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:47:37.084919  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:47:37.085054  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:47:37.089276  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:47:37.132022  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:47:37.132124  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:47:37.162365  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:47:37.194434  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:47:37.195855  870218 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:47:37.198800  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:37.199235  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:47:37.199265  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:47:37.199505  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:47:37.203973  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
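
The /etc/hosts update uses a small shell idiom: drop any line that already ends in the hostname, append a fresh "IP<TAB>name" entry, and copy the result back, so repeated runs stay idempotent. The same idea in Go, operating on a string for illustration rather than editing /etc/hosts:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsLine removes any existing entry for name and appends a fresh
    // "IP<TAB>name" line, mirroring the grep/echo idiom in the log.
    func ensureHostsLine(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsLine("127.0.0.1\tlocalhost\n", "192.168.39.1", "host.minikube.internal"))
    }
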
	I0429 12:47:37.217992  870218 kubeadm.go:877] updating cluster {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:47:37.218119  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:47:37.218170  870218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:47:37.254058  870218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 12:47:37.254136  870218 ssh_runner.go:195] Run: which lz4
	I0429 12:47:37.258626  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:47:37.258735  870218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:47:37.263492  870218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:47:37.263531  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 12:47:38.827807  870218 crio.go:462] duration metric: took 1.569087769s to copy over tarball
	I0429 12:47:38.827894  870218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:47:41.114738  870218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286814073s)
	I0429 12:47:41.114772  870218 crio.go:469] duration metric: took 2.286930667s to extract the tarball
	I0429 12:47:41.114780  870218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:47:41.153797  870218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:47:41.199426  870218 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:47:41.199455  870218 cache_images.go:84] Images are preloaded, skipping loading
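
Whether the preload tarball has to be copied over is decided by listing images with crictl and checking for the expected kube-apiserver tag; once the tarball is extracted, the second listing above succeeds and loading is skipped. A sketch of that check against crictl's JSON output; the field names follow CRI's image listing but are simplified for the example.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"strings"
    )

    // imageList models the part of "crictl images --output json" needed to
    // look for a repo tag (simplified for the sketch).
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether any listed image carries the given tag.
    func hasImage(crictlJSON, tag string) bool {
    	var list imageList
    	if err := json.Unmarshal([]byte(crictlJSON), &list); err != nil {
    		return false
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if strings.Contains(t, tag) {
    				return true
    			}
    		}
    	}
    	return false
    }

    func main() {
    	sample := `{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"]}]}`
    	fmt.Println(hasImage(sample, "kube-apiserver:v1.30.0")) // true
    }
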
	I0429 12:47:41.199464  870218 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.30.0 crio true true} ...
	I0429 12:47:41.199578  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:47:41.199653  870218 ssh_runner.go:195] Run: crio config
	I0429 12:47:41.250499  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:41.250526  870218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:47:41.250537  870218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:47:41.250559  870218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-212075 NodeName:ha-212075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:47:41.250705  870218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-212075"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:47:41.250732  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:47:41.250777  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:47:41.269494  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:47:41.269627  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
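
The lb_enable/lb_port settings in this manifest come from the IPVS probe a few lines earlier (the modprobe of ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack): control-plane load-balancing is only auto-enabled when those modules load. A small Go sketch of that gate; the module list matches the log, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable reports whether the IPVS kernel modules can be loaded,
    // which is the condition for enabling kube-vip's load-balancing above.
    func ipvsAvailable() bool {
    	return exec.Command("sudo", "modprobe", "--all",
    		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run() == nil
    }

    func main() {
    	lbEnable := "false"
    	if ipvsAvailable() {
    		lbEnable = "true" // feeds the lb_enable env var in the manifest
    	}
    	fmt.Println("lb_enable =", lbEnable)
    }
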
	I0429 12:47:41.269685  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:41.280555  870218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:47:41.280644  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:47:41.291333  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:47:41.310105  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:47:41.328730  870218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 12:47:41.347634  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0429 12:47:41.365931  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:47:41.370302  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
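
The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA VIP, dropping any stale entry first. An equivalent rewrite in Go, shown only as a sketch of what the shell command does:

    // Sketch of the /etc/hosts rewrite performed by the shell one-liner.
    package main

    import (
    	"os"
    	"strings"
    )

    func setHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		// Drop any existing line that already maps some address to the host.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := setHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
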
	I0429 12:47:41.383866  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:41.512387  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:41.530551  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.97
	I0429 12:47:41.530581  870218 certs.go:194] generating shared ca certs ...
	I0429 12:47:41.530604  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.530779  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:47:41.530833  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:47:41.530848  870218 certs.go:256] generating profile certs ...
	I0429 12:47:41.530915  870218 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:47:41.530951  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt with IP's: []
	I0429 12:47:41.722700  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt ...
	I0429 12:47:41.722741  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt: {Name:mk4f0aba10f064735148f15f887ea67a1137a3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.722964  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key ...
	I0429 12:47:41.722982  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key: {Name:mkaee3859a995806ed485f81a0abcc895804c08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:41.723093  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294
	I0429 12:47:41.723113  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.254]
	I0429 12:47:42.017824  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 ...
	I0429 12:47:42.017866  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294: {Name:mk192112794ed2eccfcd600bb5d5c95e549cded1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.018075  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294 ...
	I0429 12:47:42.018095  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294: {Name:mka4cfea4c0ea9fde780611847d7c0973ea6230b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.018201  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.e46b5294 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:47:42.018327  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.e46b5294 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
	I0429 12:47:42.018421  870218 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:47:42.018445  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt with IP's: []
	I0429 12:47:42.170382  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt ...
	I0429 12:47:42.170425  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt: {Name:mk2c7c2efcf55e17dae029e8d8b23a5d23f2d657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:42.170634  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key ...
	I0429 12:47:42.170652  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key: {Name:mk8ce0121b4c42805c1956fc1acf6c7e5ee80e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
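
The certs.go/crypto.go lines above generate the profile's client, apiserver, and aggregator certificates; the apiserver cert is signed by the shared minikubeCA with the service IP, loopback, node IP, and VIP as SANs. A minimal standalone sketch of that kind of issuance with Go's crypto/x509 (not the minikube implementation; error handling is elided for brevity) looks like:

    // Sketch: issue a CA-signed serving certificate with IP SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// In practice the CA key/cert would be loaded from ca.key / ca.crt.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour * 3650),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour * 365),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
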
	I0429 12:47:42.170754  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:47:42.170777  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:47:42.170794  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:47:42.170816  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:47:42.170839  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:47:42.170869  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:47:42.170896  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:47:42.170915  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:47:42.170984  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:47:42.171032  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:47:42.171059  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:47:42.171091  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:47:42.171131  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:47:42.171221  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:47:42.171302  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:47:42.171349  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.171385  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.171405  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.172051  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:47:42.204925  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:47:42.233610  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:47:42.265193  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:47:42.292433  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:47:42.326458  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:47:42.356962  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:47:42.386083  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:47:42.418703  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:47:42.450096  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:47:42.477246  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:47:42.506751  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:47:42.528244  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:47:42.534899  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:47:42.548693  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.554530  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.554605  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:47:42.561916  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:47:42.575762  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:47:42.589167  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.594723  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.594814  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:47:42.602038  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:47:42.615369  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:47:42.628913  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.634531  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.634595  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:42.641566  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
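
The commands above install each CA certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A hypothetical helper doing the same hash-and-symlink step from Go, shelling out to openssl just as the runner does, might look like:

    // Hypothetical helper: link a CA cert by its OpenSSL subject hash.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
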
	I0429 12:47:42.655039  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:47:42.659883  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:42.659966  870218 kubeadm.go:391] StartCluster: {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:47:42.660127  870218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 12:47:42.660199  870218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:47:42.701855  870218 cri.go:89] found id: ""
	I0429 12:47:42.701934  870218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:47:42.712771  870218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:47:42.723461  870218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:47:42.735142  870218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:47:42.735166  870218 kubeadm.go:156] found existing configuration files:
	
	I0429 12:47:42.735214  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:47:42.745568  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:47:42.745651  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:47:42.756257  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:47:42.767063  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:47:42.767142  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:47:42.778879  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:47:42.789320  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:47:42.789402  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:47:42.800240  870218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:47:42.810530  870218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:47:42.810605  870218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
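
The grep/rm sequence above checks each pre-existing kubeconfig-style file for the expected https://control-plane.minikube.internal:8443 endpoint and deletes the ones that are missing or stale so kubeadm can regenerate them. The pattern, condensed into a hypothetical Go helper (not the actual kubeadm.go code):

    // Sketch: remove kubeconfig files that do not point at the expected endpoint.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			_ = os.Remove(f) // stale or missing: let kubeadm recreate it
    			fmt.Println("removed", f)
    		}
    	}
    }
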
	I0429 12:47:42.821530  870218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:47:42.934280  870218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:47:42.934357  870218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:47:43.080585  870218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:47:43.080749  870218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:47:43.080883  870218 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0429 12:47:43.332866  870218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:47:43.342245  870218 out.go:204]   - Generating certificates and keys ...
	I0429 12:47:43.342415  870218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:47:43.342525  870218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:47:43.667747  870218 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:47:43.718862  870218 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:47:43.877370  870218 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:47:43.955431  870218 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:47:44.025648  870218 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:47:44.025818  870218 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-212075 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0429 12:47:44.114660  870218 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:47:44.114839  870218 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-212075 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0429 12:47:44.454769  870218 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:47:44.530738  870218 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:47:44.602193  870218 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:47:44.602286  870218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:47:44.691879  870218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:47:44.867537  870218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:47:44.989037  870218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:47:45.099892  870218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:47:45.247531  870218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:47:45.248041  870218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:47:45.251054  870218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:47:45.254247  870218 out.go:204]   - Booting up control plane ...
	I0429 12:47:45.254379  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:47:45.254461  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:47:45.254540  870218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:47:45.269826  870218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:47:45.270735  870218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:47:45.270791  870218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:47:45.415518  870218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:47:45.415641  870218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:47:46.416954  870218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002166461s
	I0429 12:47:46.417061  870218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:47:52.416938  870218 kubeadm.go:309] [api-check] The API server is healthy after 6.003166313s
	I0429 12:47:52.433395  870218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:47:52.450424  870218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:47:52.481163  870218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:47:52.481367  870218 kubeadm.go:309] [mark-control-plane] Marking the node ha-212075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:47:52.494539  870218 kubeadm.go:309] [bootstrap-token] Using token: oy0k5e.zul1f1ey7gnfr2ai
	I0429 12:47:52.496157  870218 out.go:204]   - Configuring RBAC rules ...
	I0429 12:47:52.496331  870218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:47:52.502259  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:47:52.511107  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:47:52.515407  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:47:52.518870  870218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:47:52.527162  870218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:47:52.824005  870218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:47:53.332771  870218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:47:53.824147  870218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:47:53.825228  870218 kubeadm.go:309] 
	I0429 12:47:53.825297  870218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:47:53.825327  870218 kubeadm.go:309] 
	I0429 12:47:53.825444  870218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:47:53.825457  870218 kubeadm.go:309] 
	I0429 12:47:53.825521  870218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:47:53.825634  870218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:47:53.825715  870218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:47:53.825748  870218 kubeadm.go:309] 
	I0429 12:47:53.825830  870218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:47:53.825842  870218 kubeadm.go:309] 
	I0429 12:47:53.825919  870218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:47:53.825928  870218 kubeadm.go:309] 
	I0429 12:47:53.826008  870218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:47:53.826118  870218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:47:53.826239  870218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:47:53.826252  870218 kubeadm.go:309] 
	I0429 12:47:53.826371  870218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:47:53.826475  870218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:47:53.826494  870218 kubeadm.go:309] 
	I0429 12:47:53.826613  870218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oy0k5e.zul1f1ey7gnfr2ai \
	I0429 12:47:53.826769  870218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 12:47:53.826807  870218 kubeadm.go:309] 	--control-plane 
	I0429 12:47:53.826817  870218 kubeadm.go:309] 
	I0429 12:47:53.826927  870218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:47:53.826938  870218 kubeadm.go:309] 
	I0429 12:47:53.827054  870218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oy0k5e.zul1f1ey7gnfr2ai \
	I0429 12:47:53.827204  870218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 12:47:53.827540  870218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
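
For reference, the --discovery-token-ca-cert-hash in the join commands printed above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. A short sketch (independent of the test code) that recomputes it from ca.crt:

    // Sketch: recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
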
	I0429 12:47:53.827637  870218 cni.go:84] Creating CNI manager for ""
	I0429 12:47:53.827661  870218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:47:53.829635  870218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:47:53.831034  870218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:47:53.837177  870218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:47:53.837201  870218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:47:53.857011  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:47:54.316710  870218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:47:54.316795  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:54.316822  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075 minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=true
	I0429 12:47:54.345797  870218 ops.go:34] apiserver oom_adj: -16
	I0429 12:47:54.496882  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:54.997852  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:55.497076  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:55.997178  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:56.497134  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:56.997042  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:57.497933  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:57.997275  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:58.496967  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:58.997579  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:59.497467  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:47:59.996968  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:00.497065  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:00.996897  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:01.497143  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:01.997850  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:02.497748  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:02.997561  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:03.497638  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:03.997996  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:04.497993  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:04.997647  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:05.497664  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:05.996988  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:06.497690  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:06.997830  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:48:07.106754  870218 kubeadm.go:1107] duration metric: took 12.790030758s to wait for elevateKubeSystemPrivileges
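
The burst of `kubectl get sa default` calls above is a poll loop: the runner retries roughly every 500ms until the default service account exists, then reports the elevateKubeSystemPrivileges duration. A generic version of that wait-until-ready pattern, sketched here rather than taken from the minikube source:

    // Sketch: poll a check function on an interval until it succeeds or times out.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitFor(check func() error, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := check(); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s: %w", timeout, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitFor(func() error {
    		return exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    	}, 500*time.Millisecond, 2*time.Minute)
    	fmt.Println("default service account ready:", err == nil)
    }
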
	W0429 12:48:07.106816  870218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:48:07.106828  870218 kubeadm.go:393] duration metric: took 24.446869371s to StartCluster
	I0429 12:48:07.106853  870218 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:07.106931  870218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:48:07.107757  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:07.108054  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:48:07.108076  870218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:48:07.108170  870218 addons.go:69] Setting storage-provisioner=true in profile "ha-212075"
	I0429 12:48:07.108059  870218 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:07.108227  870218 addons.go:234] Setting addon storage-provisioner=true in "ha-212075"
	I0429 12:48:07.108236  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:48:07.108272  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:07.108274  870218 addons.go:69] Setting default-storageclass=true in profile "ha-212075"
	I0429 12:48:07.108298  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:07.108318  870218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-212075"
	I0429 12:48:07.108698  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.108711  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.108741  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.108818  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.125835  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0429 12:48:07.125855  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0429 12:48:07.126384  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.126392  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.126889  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.126907  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.127043  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.127070  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.127240  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.127469  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.127687  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.127850  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.127884  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.130130  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:48:07.130484  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:48:07.131110  870218 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:48:07.131444  870218 addons.go:234] Setting addon default-storageclass=true in "ha-212075"
	I0429 12:48:07.131495  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:07.131891  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.131942  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.145217  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0429 12:48:07.145713  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.146291  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.146318  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.146674  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.146921  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.147889  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0429 12:48:07.148424  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.148984  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.149009  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.149027  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:07.151195  870218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:48:07.149400  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.151835  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:07.152779  870218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:48:07.152793  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:48:07.152798  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:07.152812  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:07.156024  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.156453  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:07.156481  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.156627  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:07.156875  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:07.157054  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:07.157212  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:07.169582  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0429 12:48:07.170085  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:07.170658  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:07.170689  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:07.171047  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:07.171293  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:07.172975  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:07.173320  870218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:48:07.173343  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:48:07.173366  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:07.176220  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.176648  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:07.176680  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:07.176863  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:07.177079  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:07.177269  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:07.177438  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:07.266153  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:48:07.412582  870218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:48:07.452705  870218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:48:08.095864  870218 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
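
The long sed pipeline above injects a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward plugin in CoreDNS's Corefile before replacing the ConfigMap. A simplified Go sketch of the same insertion on a plain Corefile string (hypothetical, not the command actually run):

    // Sketch: insert a hosts block before the forward plugin in a Corefile string.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, ip, host string) string {
    	if strings.Contains(corefile, "hosts {") {
    		return corefile // already injected; keep the operation idempotent
    	}
    	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
    }
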
	I0429 12:48:08.460264  870218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.047632478s)
	I0429 12:48:08.460335  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460334  870218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.007583898s)
	I0429 12:48:08.460382  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460397  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460349  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460782  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.460798  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.460807  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460814  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.460874  870218 main.go:141] libmachine: (ha-212075) DBG | Closing plugin on server side
	I0429 12:48:08.460931  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.460951  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.460968  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.460979  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.461022  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.461036  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.461284  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.461303  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.461463  870218 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:48:08.461474  870218 round_trippers.go:469] Request Headers:
	I0429 12:48:08.461491  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.461497  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:48:08.472509  870218 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 12:48:08.473158  870218 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:48:08.473177  870218 round_trippers.go:469] Request Headers:
	I0429 12:48:08.473184  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.473189  870218 round_trippers.go:473]     Content-Type: application/json
	I0429 12:48:08.473191  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:48:08.476542  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:08.476725  870218 main.go:141] libmachine: Making call to close driver server
	I0429 12:48:08.476745  870218 main.go:141] libmachine: (ha-212075) Calling .Close
	I0429 12:48:08.477033  870218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:48:08.477052  870218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:48:08.477060  870218 main.go:141] libmachine: (ha-212075) DBG | Closing plugin on server side
	I0429 12:48:08.479794  870218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:48:08.481166  870218 addons.go:505] duration metric: took 1.373080963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:48:08.481216  870218 start.go:245] waiting for cluster config update ...
	I0429 12:48:08.481256  870218 start.go:254] writing updated cluster config ...
	I0429 12:48:08.482945  870218 out.go:177] 
	I0429 12:48:08.484429  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:08.484522  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:08.486369  870218 out.go:177] * Starting "ha-212075-m02" control-plane node in "ha-212075" cluster
	I0429 12:48:08.487634  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:48:08.487679  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:48:08.487783  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:48:08.487798  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:48:08.487893  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:08.488101  870218 start.go:360] acquireMachinesLock for ha-212075-m02: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:48:08.488155  870218 start.go:364] duration metric: took 29.185µs to acquireMachinesLock for "ha-212075-m02"
	I0429 12:48:08.488180  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:08.488293  870218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0429 12:48:08.489965  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:48:08.490069  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:08.490097  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:08.506743  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0429 12:48:08.507216  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:08.507765  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:08.507789  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:08.508188  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:08.508403  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:08.508606  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:08.508820  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:48:08.508849  870218 client.go:168] LocalClient.Create starting
	I0429 12:48:08.508889  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:48:08.508932  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:48:08.508969  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:48:08.509048  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:48:08.509075  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:48:08.509094  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:48:08.509130  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:48:08.509143  870218 main.go:141] libmachine: (ha-212075-m02) Calling .PreCreateCheck
	I0429 12:48:08.509327  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:08.509720  870218 main.go:141] libmachine: Creating machine...
	I0429 12:48:08.509736  870218 main.go:141] libmachine: (ha-212075-m02) Calling .Create
	I0429 12:48:08.509878  870218 main.go:141] libmachine: (ha-212075-m02) Creating KVM machine...
	I0429 12:48:08.511329  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found existing default KVM network
	I0429 12:48:08.511504  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found existing private KVM network mk-ha-212075
	I0429 12:48:08.511675  870218 main.go:141] libmachine: (ha-212075-m02) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 ...
	I0429 12:48:08.511701  870218 main.go:141] libmachine: (ha-212075-m02) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:48:08.511754  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.511637  870647 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:48:08.511870  870218 main.go:141] libmachine: (ha-212075-m02) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:48:08.772309  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.772142  870647 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa...
	I0429 12:48:08.898179  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.898033  870647 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/ha-212075-m02.rawdisk...
	I0429 12:48:08.898251  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Writing magic tar header
	I0429 12:48:08.898270  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Writing SSH key tar header
	I0429 12:48:08.898287  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:08.898155  870647 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 ...
	I0429 12:48:08.898330  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02
	I0429 12:48:08.898347  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:48:08.898361  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02 (perms=drwx------)
	I0429 12:48:08.898387  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:48:08.898401  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:48:08.898435  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:48:08.898449  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:48:08.898463  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:48:08.898477  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:48:08.898494  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:48:08.898504  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Checking permissions on dir: /home
	I0429 12:48:08.898517  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Skipping /home - not owner
	I0429 12:48:08.898531  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:48:08.898551  870218 main.go:141] libmachine: (ha-212075-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:48:08.898567  870218 main.go:141] libmachine: (ha-212075-m02) Creating domain...
	I0429 12:48:08.899629  870218 main.go:141] libmachine: (ha-212075-m02) define libvirt domain using xml: 
	I0429 12:48:08.899656  870218 main.go:141] libmachine: (ha-212075-m02) <domain type='kvm'>
	I0429 12:48:08.899663  870218 main.go:141] libmachine: (ha-212075-m02)   <name>ha-212075-m02</name>
	I0429 12:48:08.899669  870218 main.go:141] libmachine: (ha-212075-m02)   <memory unit='MiB'>2200</memory>
	I0429 12:48:08.899677  870218 main.go:141] libmachine: (ha-212075-m02)   <vcpu>2</vcpu>
	I0429 12:48:08.899684  870218 main.go:141] libmachine: (ha-212075-m02)   <features>
	I0429 12:48:08.899692  870218 main.go:141] libmachine: (ha-212075-m02)     <acpi/>
	I0429 12:48:08.899699  870218 main.go:141] libmachine: (ha-212075-m02)     <apic/>
	I0429 12:48:08.899709  870218 main.go:141] libmachine: (ha-212075-m02)     <pae/>
	I0429 12:48:08.899714  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.899733  870218 main.go:141] libmachine: (ha-212075-m02)   </features>
	I0429 12:48:08.899744  870218 main.go:141] libmachine: (ha-212075-m02)   <cpu mode='host-passthrough'>
	I0429 12:48:08.899749  870218 main.go:141] libmachine: (ha-212075-m02)   
	I0429 12:48:08.899763  870218 main.go:141] libmachine: (ha-212075-m02)   </cpu>
	I0429 12:48:08.899774  870218 main.go:141] libmachine: (ha-212075-m02)   <os>
	I0429 12:48:08.899785  870218 main.go:141] libmachine: (ha-212075-m02)     <type>hvm</type>
	I0429 12:48:08.899794  870218 main.go:141] libmachine: (ha-212075-m02)     <boot dev='cdrom'/>
	I0429 12:48:08.899805  870218 main.go:141] libmachine: (ha-212075-m02)     <boot dev='hd'/>
	I0429 12:48:08.899842  870218 main.go:141] libmachine: (ha-212075-m02)     <bootmenu enable='no'/>
	I0429 12:48:08.899866  870218 main.go:141] libmachine: (ha-212075-m02)   </os>
	I0429 12:48:08.899892  870218 main.go:141] libmachine: (ha-212075-m02)   <devices>
	I0429 12:48:08.899912  870218 main.go:141] libmachine: (ha-212075-m02)     <disk type='file' device='cdrom'>
	I0429 12:48:08.899930  870218 main.go:141] libmachine: (ha-212075-m02)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/boot2docker.iso'/>
	I0429 12:48:08.899941  870218 main.go:141] libmachine: (ha-212075-m02)       <target dev='hdc' bus='scsi'/>
	I0429 12:48:08.899950  870218 main.go:141] libmachine: (ha-212075-m02)       <readonly/>
	I0429 12:48:08.899958  870218 main.go:141] libmachine: (ha-212075-m02)     </disk>
	I0429 12:48:08.899964  870218 main.go:141] libmachine: (ha-212075-m02)     <disk type='file' device='disk'>
	I0429 12:48:08.899973  870218 main.go:141] libmachine: (ha-212075-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:48:08.899981  870218 main.go:141] libmachine: (ha-212075-m02)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/ha-212075-m02.rawdisk'/>
	I0429 12:48:08.899992  870218 main.go:141] libmachine: (ha-212075-m02)       <target dev='hda' bus='virtio'/>
	I0429 12:48:08.900009  870218 main.go:141] libmachine: (ha-212075-m02)     </disk>
	I0429 12:48:08.900025  870218 main.go:141] libmachine: (ha-212075-m02)     <interface type='network'>
	I0429 12:48:08.900040  870218 main.go:141] libmachine: (ha-212075-m02)       <source network='mk-ha-212075'/>
	I0429 12:48:08.900051  870218 main.go:141] libmachine: (ha-212075-m02)       <model type='virtio'/>
	I0429 12:48:08.900062  870218 main.go:141] libmachine: (ha-212075-m02)     </interface>
	I0429 12:48:08.900072  870218 main.go:141] libmachine: (ha-212075-m02)     <interface type='network'>
	I0429 12:48:08.900081  870218 main.go:141] libmachine: (ha-212075-m02)       <source network='default'/>
	I0429 12:48:08.900092  870218 main.go:141] libmachine: (ha-212075-m02)       <model type='virtio'/>
	I0429 12:48:08.900107  870218 main.go:141] libmachine: (ha-212075-m02)     </interface>
	I0429 12:48:08.900126  870218 main.go:141] libmachine: (ha-212075-m02)     <serial type='pty'>
	I0429 12:48:08.900139  870218 main.go:141] libmachine: (ha-212075-m02)       <target port='0'/>
	I0429 12:48:08.900149  870218 main.go:141] libmachine: (ha-212075-m02)     </serial>
	I0429 12:48:08.900173  870218 main.go:141] libmachine: (ha-212075-m02)     <console type='pty'>
	I0429 12:48:08.900184  870218 main.go:141] libmachine: (ha-212075-m02)       <target type='serial' port='0'/>
	I0429 12:48:08.900192  870218 main.go:141] libmachine: (ha-212075-m02)     </console>
	I0429 12:48:08.900203  870218 main.go:141] libmachine: (ha-212075-m02)     <rng model='virtio'>
	I0429 12:48:08.900216  870218 main.go:141] libmachine: (ha-212075-m02)       <backend model='random'>/dev/random</backend>
	I0429 12:48:08.900225  870218 main.go:141] libmachine: (ha-212075-m02)     </rng>
	I0429 12:48:08.900234  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.900242  870218 main.go:141] libmachine: (ha-212075-m02)     
	I0429 12:48:08.900252  870218 main.go:141] libmachine: (ha-212075-m02)   </devices>
	I0429 12:48:08.900259  870218 main.go:141] libmachine: (ha-212075-m02) </domain>
	I0429 12:48:08.900268  870218 main.go:141] libmachine: (ha-212075-m02) 
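
For illustration only (this is not the minikube KVM driver's actual code): a minimal Go sketch of defining and starting a libvirt domain from XML like the one printed above, using the libvirt Go bindings. The domain name and the trimmed-down XML below are placeholders.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same connection URI as KVMQemuURI in the config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Stand-in domain definition; the real one is the XML logged above.
	domainXML := `<domain type='kvm'>
  <name>example-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it; the driver then waits for an IP
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
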
	I0429 12:48:08.907946  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:a9:53:79 in network default
	I0429 12:48:08.908582  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring networks are active...
	I0429 12:48:08.908607  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:08.909355  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring network default is active
	I0429 12:48:08.909634  870218 main.go:141] libmachine: (ha-212075-m02) Ensuring network mk-ha-212075 is active
	I0429 12:48:08.910063  870218 main.go:141] libmachine: (ha-212075-m02) Getting domain xml...
	I0429 12:48:08.910889  870218 main.go:141] libmachine: (ha-212075-m02) Creating domain...
	I0429 12:48:10.185813  870218 main.go:141] libmachine: (ha-212075-m02) Waiting to get IP...
	I0429 12:48:10.186939  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.187509  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.187547  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.187468  870647 retry.go:31] will retry after 301.578397ms: waiting for machine to come up
	I0429 12:48:10.491341  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.491897  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.491932  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.491839  870647 retry.go:31] will retry after 321.98325ms: waiting for machine to come up
	I0429 12:48:10.815451  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:10.815808  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:10.815832  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:10.815763  870647 retry.go:31] will retry after 394.050947ms: waiting for machine to come up
	I0429 12:48:11.211473  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:11.211909  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:11.211942  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:11.211849  870647 retry.go:31] will retry after 430.51973ms: waiting for machine to come up
	I0429 12:48:11.644676  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:11.645219  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:11.645248  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:11.645148  870647 retry.go:31] will retry after 709.605764ms: waiting for machine to come up
	I0429 12:48:12.356069  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:12.356525  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:12.356593  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:12.356473  870647 retry.go:31] will retry after 890.075621ms: waiting for machine to come up
	I0429 12:48:13.248841  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:13.249370  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:13.249406  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:13.249304  870647 retry.go:31] will retry after 727.943001ms: waiting for machine to come up
	I0429 12:48:13.978718  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:13.979281  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:13.979316  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:13.979215  870647 retry.go:31] will retry after 945.901335ms: waiting for machine to come up
	I0429 12:48:14.926762  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:14.927398  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:14.927432  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:14.927328  870647 retry.go:31] will retry after 1.459605646s: waiting for machine to come up
	I0429 12:48:16.388522  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:16.388934  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:16.388959  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:16.388887  870647 retry.go:31] will retry after 1.569864244s: waiting for machine to come up
	I0429 12:48:17.960898  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:17.961469  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:17.961500  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:17.961424  870647 retry.go:31] will retry after 2.113218061s: waiting for machine to come up
	I0429 12:48:20.078292  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:20.078741  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:20.078768  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:20.078698  870647 retry.go:31] will retry after 2.352898738s: waiting for machine to come up
	I0429 12:48:22.434312  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:22.434768  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:22.434792  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:22.434720  870647 retry.go:31] will retry after 4.188987093s: waiting for machine to come up
	I0429 12:48:26.627589  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:26.628066  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find current IP address of domain ha-212075-m02 in network mk-ha-212075
	I0429 12:48:26.628098  870218 main.go:141] libmachine: (ha-212075-m02) DBG | I0429 12:48:26.628002  870647 retry.go:31] will retry after 4.959414999s: waiting for machine to come up
	I0429 12:48:31.590773  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.591277  870218 main.go:141] libmachine: (ha-212075-m02) Found IP for machine: 192.168.39.36
	I0429 12:48:31.591299  870218 main.go:141] libmachine: (ha-212075-m02) Reserving static IP address...
	I0429 12:48:31.591312  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has current primary IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.591748  870218 main.go:141] libmachine: (ha-212075-m02) DBG | unable to find host DHCP lease matching {name: "ha-212075-m02", mac: "52:54:00:46:f4:9a", ip: "192.168.39.36"} in network mk-ha-212075
	I0429 12:48:31.678248  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Getting to WaitForSSH function...
	I0429 12:48:31.678313  870218 main.go:141] libmachine: (ha-212075-m02) Reserved static IP address: 192.168.39.36
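
The "will retry after ..." lines above come from polling the new domain for a DHCP lease with a growing delay. A minimal, hypothetical Go sketch of that poll-with-backoff pattern (not minikube's retry.go; the function name, delays and messages are illustrative):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered delay until it succeeds or the
// deadline passes, in the spirit of the "will retry after ..." lines above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second { // grow the delay, capped
			delay = delay * 3 / 2
		}
	}
	return errors.New("timed out waiting for the machine to get an IP")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 { // stand-in for "domain has no DHCP lease yet"
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
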
	I0429 12:48:31.678330  870218 main.go:141] libmachine: (ha-212075-m02) Waiting for SSH to be available...
	I0429 12:48:31.681502  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.682128  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.682159  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.682371  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using SSH client type: external
	I0429 12:48:31.682397  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa (-rw-------)
	I0429 12:48:31.682431  870218 main.go:141] libmachine: (ha-212075-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:48:31.682456  870218 main.go:141] libmachine: (ha-212075-m02) DBG | About to run SSH command:
	I0429 12:48:31.682464  870218 main.go:141] libmachine: (ha-212075-m02) DBG | exit 0
	I0429 12:48:31.811773  870218 main.go:141] libmachine: (ha-212075-m02) DBG | SSH cmd err, output: <nil>: 
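
The WaitForSSH step above probes the guest by running a command that exits 0 over SSH. A minimal sketch of the same probe with golang.org/x/crypto/ssh; the address, user and key path are copied from the log, everything else is illustrative rather than the driver's implementation.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address are taken from the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.36:22", cfg)
	if err != nil {
		log.Fatal("ssh not ready: ", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same liveness probe as in the log: a command that exits 0.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
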
	I0429 12:48:31.812004  870218 main.go:141] libmachine: (ha-212075-m02) KVM machine creation complete!
	I0429 12:48:31.812373  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:31.812982  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:31.813232  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:31.813485  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:48:31.813500  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 12:48:31.814808  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:48:31.814824  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:48:31.814830  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:48:31.814836  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:31.817101  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.817490  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.817519  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.817650  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:31.817845  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.818046  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.818236  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:31.818444  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:31.818670  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:31.818681  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:48:31.935100  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:48:31.935130  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:48:31.935138  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:31.938171  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.938493  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:31.938527  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:31.938645  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:31.938880  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.939058  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:31.939226  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:31.939407  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:31.939600  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:31.939613  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:48:32.053082  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:48:32.053177  870218 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:48:32.053189  870218 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:48:32.053198  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.053466  870218 buildroot.go:166] provisioning hostname "ha-212075-m02"
	I0429 12:48:32.053493  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.053731  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.056710  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.057187  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.057213  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.057373  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.057590  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.057787  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.057940  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.058097  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.058291  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.058303  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075-m02 && echo "ha-212075-m02" | sudo tee /etc/hostname
	I0429 12:48:32.186743  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075-m02
	
	I0429 12:48:32.186780  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.190004  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.190426  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.190458  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.190737  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.190924  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.191144  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.191355  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.191580  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.191774  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.191799  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:48:32.313526  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:48:32.313572  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:48:32.313594  870218 buildroot.go:174] setting up certificates
	I0429 12:48:32.313607  870218 provision.go:84] configureAuth start
	I0429 12:48:32.313644  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetMachineName
	I0429 12:48:32.314022  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:32.316977  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.317366  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.317400  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.317566  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.319834  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.320185  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.320221  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.320393  870218 provision.go:143] copyHostCerts
	I0429 12:48:32.320430  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:48:32.320465  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:48:32.320475  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:48:32.320540  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:48:32.320633  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:48:32.320657  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:48:32.320664  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:48:32.320689  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:48:32.320743  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:48:32.320761  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:48:32.320767  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:48:32.320792  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:48:32.320890  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075-m02 san=[127.0.0.1 192.168.39.36 ha-212075-m02 localhost minikube]
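
provision.go above generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube. A minimal Go sketch of building such a SAN list with crypto/x509; it self-signs for brevity, whereas the real flow signs with the cluster CA key, so treat everything below as illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-212075-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log line above.
		DNSNames:    []string{"ha-212075-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.36")},
	}
	// Self-signed here for brevity; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
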
	I0429 12:48:32.428710  870218 provision.go:177] copyRemoteCerts
	I0429 12:48:32.428786  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:48:32.428817  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.431477  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.431790  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.431816  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.431990  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.432223  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.432417  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.432560  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:32.523038  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:48:32.523114  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:48:32.552778  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:48:32.552860  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:48:32.582395  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:48:32.582487  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:48:32.611276  870218 provision.go:87] duration metric: took 297.652353ms to configureAuth
	I0429 12:48:32.611312  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:48:32.611552  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:32.611642  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.614288  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.614655  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.614690  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.614928  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.615179  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.615442  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.615591  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.615741  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:32.615994  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:32.616016  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:48:32.889700  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:48:32.889741  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:48:32.889754  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetURL
	I0429 12:48:32.891136  870218 main.go:141] libmachine: (ha-212075-m02) DBG | Using libvirt version 6000000
	I0429 12:48:32.893428  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.893833  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.893868  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.894083  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:48:32.894098  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:48:32.894106  870218 client.go:171] duration metric: took 24.3852485s to LocalClient.Create
	I0429 12:48:32.894135  870218 start.go:167] duration metric: took 24.385317751s to libmachine.API.Create "ha-212075"
	I0429 12:48:32.894148  870218 start.go:293] postStartSetup for "ha-212075-m02" (driver="kvm2")
	I0429 12:48:32.894162  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:48:32.894212  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:32.894482  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:48:32.894509  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:32.896782  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.897183  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:32.897213  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:32.897359  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:32.897544  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:32.897690  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:32.897796  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:32.982542  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:48:32.987133  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:48:32.987166  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:48:32.987242  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:48:32.987334  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:48:32.987350  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:48:32.987493  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:48:32.997640  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:48:33.023959  870218 start.go:296] duration metric: took 129.789656ms for postStartSetup
	I0429 12:48:33.024034  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetConfigRaw
	I0429 12:48:33.024677  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:33.027566  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.028047  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.028090  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.028348  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:48:33.028616  870218 start.go:128] duration metric: took 24.540303032s to createHost
	I0429 12:48:33.028651  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:33.031168  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.031576  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.031604  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.031827  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.032054  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.032225  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.032390  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.032628  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:48:33.032815  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0429 12:48:33.032826  870218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:48:33.145647  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394913.118554750
	
	I0429 12:48:33.145686  870218 fix.go:216] guest clock: 1714394913.118554750
	I0429 12:48:33.145722  870218 fix.go:229] Guest: 2024-04-29 12:48:33.11855475 +0000 UTC Remote: 2024-04-29 12:48:33.028632996 +0000 UTC m=+82.403785948 (delta=89.921754ms)
	I0429 12:48:33.145751  870218 fix.go:200] guest clock delta is within tolerance: 89.921754ms
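
The fix.go lines above parse the guest's clock, compute the delta against the host and accept it when it falls inside a tolerance. A toy Go sketch of that check; the 2-second tolerance is an assumption, not minikube's value.

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute drift between host and guest clocks and
// whether it is acceptable.
func withinTolerance(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1714394913, 118554750) // parsed from the guest's date output above
	host := time.Now()
	delta, ok := withinTolerance(host, guest, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
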
	I0429 12:48:33.145760  870218 start.go:83] releasing machines lock for "ha-212075-m02", held for 24.657592144s
	I0429 12:48:33.145790  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.146156  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:33.149182  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.149826  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.149921  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.152272  870218 out.go:177] * Found network options:
	I0429 12:48:33.153616  870218 out.go:177]   - NO_PROXY=192.168.39.97
	W0429 12:48:33.154781  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:48:33.154814  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155526  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155717  870218 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 12:48:33.155839  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:48:33.155888  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	W0429 12:48:33.155947  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:48:33.156040  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:48:33.156068  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 12:48:33.159109  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159175  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159607  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.159639  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159705  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:33.159739  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:33.159843  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.160130  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 12:48:33.160233  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.160299  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 12:48:33.160394  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.160462  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 12:48:33.160543  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:33.160577  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 12:48:33.408285  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:48:33.415387  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:48:33.415464  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:48:33.433502  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:48:33.433530  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:48:33.433612  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:48:33.450404  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:48:33.466440  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:48:33.466506  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:48:33.482112  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:48:33.497775  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:48:33.618172  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:48:33.775113  870218 docker.go:233] disabling docker service ...
	I0429 12:48:33.775228  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:48:33.791054  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:48:33.805551  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:48:33.929373  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:48:34.046045  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:48:34.062693  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:48:34.082723  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:48:34.082828  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.095496  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:48:34.095585  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.107376  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.119146  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.130938  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:48:34.143454  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.155583  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:48:34.174540  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
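
The sed commands above point CRI-O at the pause image and the cgroupfs cgroup manager by editing its drop-in config. A hypothetical local Go equivalent for those two settings (minikube performs the edits with sed over SSH; this sketch is not its implementation):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

// setKey rewrites (or appends) a `key = value` line, mirroring the sed edits above.
func setKey(lines []string, key, value string) []string {
	out := make([]string, 0, len(lines)+1)
	replaced := false
	for _, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), key+" =") {
			out = append(out, fmt.Sprintf("%s = %s", key, value))
			replaced = true
			continue
		}
		out = append(out, l)
	}
	if !replaced {
		out = append(out, fmt.Sprintf("%s = %s", key, value))
	}
	return out
}

func main() {
	data, err := os.ReadFile(confPath)
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(string(data), "\n")
	lines = setKey(lines, "pause_image", `"registry.k8s.io/pause:3.9"`)
	lines = setKey(lines, "cgroup_manager", `"cgroupfs"`)
	if err := os.WriteFile(confPath, []byte(strings.Join(lines, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}
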
	I0429 12:48:34.188124  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:48:34.200241  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:48:34.200313  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:48:34.215862  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:48:34.226798  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:48:34.345854  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:48:34.493199  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:48:34.493282  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:48:34.498665  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:48:34.498737  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:48:34.502609  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:48:34.546513  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:48:34.546611  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:48:34.577592  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:48:34.610836  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:48:34.612460  870218 out.go:177]   - env NO_PROXY=192.168.39.97
	I0429 12:48:34.614047  870218 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 12:48:34.617088  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:34.617431  870218 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:48:23 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 12:48:34.617464  870218 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 12:48:34.617681  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:48:34.622595  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:48:34.637467  870218 mustload.go:65] Loading cluster: ha-212075
	I0429 12:48:34.637709  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:48:34.638083  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:34.638119  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:34.655335  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0429 12:48:34.655921  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:34.656505  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:34.656533  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:34.656949  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:34.657168  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:48:34.658862  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:34.659208  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:34.659241  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:34.676178  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0429 12:48:34.676652  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:34.677199  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:34.677225  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:34.677649  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:34.677907  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:34.678108  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.36
	I0429 12:48:34.678120  870218 certs.go:194] generating shared ca certs ...
	I0429 12:48:34.678137  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.678279  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:48:34.678315  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:48:34.678324  870218 certs.go:256] generating profile certs ...
	I0429 12:48:34.678399  870218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:48:34.678425  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885
	I0429 12:48:34.678440  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.254]
	I0429 12:48:34.805493  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 ...
	I0429 12:48:34.805534  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885: {Name:mkd1806caee1a077a46115403308bba9c5b89af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.805721  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885 ...
	I0429 12:48:34.805735  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885: {Name:mk269ebb85b90f5fc58a4363fce8b015ee69584d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:48:34.805806  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.4917e885 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:48:34.805933  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.4917e885 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
	I0429 12:48:34.806072  870218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
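	The apiserver profile cert generated above is issued for every address the control plane answers on, including the kube-vip VIP 192.168.39.254. A minimal sketch of issuing such a server certificate with IP SANs using Go's crypto/x509 (the CA here is self-generated purely for illustration and error handling is elided; minikube signs with its existing minikubeCA instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative CA; in minikube the existing minikubeCA key/cert would be loaded instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert carrying the IP SANs listed in the log (service VIP, localhost, node IPs, kube-vip VIP).
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.36"), net.ParseIP("192.168.39.254"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}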
	I0429 12:48:34.806091  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:48:34.806103  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:48:34.806117  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:48:34.806130  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:48:34.806140  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:48:34.806150  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:48:34.806159  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:48:34.806171  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:48:34.806219  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:48:34.806252  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:48:34.806262  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:48:34.806284  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:48:34.806309  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:48:34.806339  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:48:34.806399  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:48:34.806444  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:34.806466  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:48:34.806484  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:48:34.806530  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:34.809733  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:34.810136  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:34.810161  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:34.810406  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:34.810632  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:34.810800  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:34.810968  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:34.887807  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 12:48:34.893360  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 12:48:34.905730  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 12:48:34.910714  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 12:48:34.925811  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 12:48:34.930751  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 12:48:34.942482  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 12:48:34.947167  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 12:48:34.959068  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 12:48:34.964374  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 12:48:34.977275  870218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 12:48:34.986440  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 12:48:35.002753  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:48:35.033535  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:48:35.060599  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:48:35.086939  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:48:35.113656  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 12:48:35.140419  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 12:48:35.167085  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:48:35.193420  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:48:35.218968  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:48:35.244325  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:48:35.269718  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:48:35.295519  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 12:48:35.314438  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 12:48:35.333492  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 12:48:35.351305  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 12:48:35.368872  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 12:48:35.386640  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 12:48:35.404506  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 12:48:35.422626  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:48:35.429174  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:48:35.441867  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.446719  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.446818  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:48:35.454462  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:48:35.466704  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:48:35.479054  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.484682  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.484764  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:48:35.490502  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:48:35.502670  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:48:35.514732  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.519580  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.519657  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:48:35.525715  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
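	The three ln -fs steps above follow the OpenSSL trust-store convention: a symlink named after the certificate's subject hash, with suffix .0, must point at the PEM under /etc/ssl/certs for the cert to be picked up. A minimal sketch of that hash-and-link step (paths are examples taken from the log; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkCert(pemPath string) error {
		// openssl prints the subject hash that the trust store expects as the link name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// Replace an existing link rather than failing, mirroring "ln -fs".
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}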
	I0429 12:48:35.537965  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:48:35.542342  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:48:35.542400  870218 kubeadm.go:928] updating node {m02 192.168.39.36 8443 v1.30.0 crio true true} ...
	I0429 12:48:35.542490  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:48:35.542526  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:48:35.542581  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:48:35.561058  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:48:35.561143  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 12:48:35.561210  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:48:35.574246  870218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:48:35.574321  870218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:48:35.585778  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:48:35.585815  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:48:35.585899  870218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:48:35.585897  870218 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0429 12:48:35.585912  870218 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0429 12:48:35.590788  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:48:35.590829  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:48:36.224127  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:48:36.224220  870218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:48:36.231982  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:48:36.232023  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:48:36.568380  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:48:36.585896  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:48:36.586003  870218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:48:36.590713  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:48:36.590749  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
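	Each binary transfer above is gated by an existence check: a non-zero exit from stat is treated as "missing" and the cached binary is copied to the node. A minimal local sketch of that check-then-copy pattern (minikube does this over SSH via ssh_runner; the paths are illustrative and writing under /var/lib/minikube would require root):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	func ensureBinary(cache, dest string) error {
		if _, err := os.Stat(dest); err == nil {
			return nil // already present on the node
		}
		src, err := os.Open(cache)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.OpenFile(dest, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
		if err != nil {
			return err
		}
		defer dst.Close()
		_, err = io.Copy(dst, src)
		return err
	}

	func main() {
		if err := ensureBinary(
			os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.30.0/kubelet"),
			"/var/lib/minikube/binaries/v1.30.0/kubelet",
		); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}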
	I0429 12:48:37.056796  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 12:48:37.067649  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 12:48:37.088488  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:48:37.108016  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:48:37.127116  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:48:37.131646  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:48:37.146577  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:48:37.271454  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:48:37.288942  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:48:37.289340  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:48:37.289389  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:48:37.305131  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0429 12:48:37.305662  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:48:37.306176  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:48:37.306198  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:48:37.306568  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:48:37.306815  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:48:37.306984  870218 start.go:316] joinCluster: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:48:37.307089  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:48:37.307111  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:48:37.310630  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:37.311149  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:48:37.311190  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:48:37.311387  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:48:37.311590  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:48:37.311753  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:48:37.311911  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:48:37.495514  870218 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:48:37.495571  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s07sl2.kup7dzd4wu3ttqwx --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m02 --control-plane --apiserver-advertise-address=192.168.39.36 --apiserver-bind-port=8443"
	I0429 12:49:02.049323  870218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s07sl2.kup7dzd4wu3ttqwx --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m02 --control-plane --apiserver-advertise-address=192.168.39.36 --apiserver-bind-port=8443": (24.553721774s)
	I0429 12:49:02.049397  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:49:02.558233  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075-m02 minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=false
	I0429 12:49:02.732186  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-212075-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 12:49:02.854666  870218 start.go:318] duration metric: took 25.547674937s to joinCluster
	I0429 12:49:02.854762  870218 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:02.856131  870218 out.go:177] * Verifying Kubernetes components...
	I0429 12:49:02.855106  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:02.857470  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:03.067090  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:49:03.085215  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:49:03.085546  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 12:49:03.085629  870218 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0429 12:49:03.085948  870218 node_ready.go:35] waiting up to 6m0s for node "ha-212075-m02" to be "Ready" ...
	I0429 12:49:03.086041  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:03.086049  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:03.086057  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:03.086062  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:03.096961  870218 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 12:49:03.586849  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:03.586877  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:03.586887  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:03.586894  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:03.590813  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:04.086875  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:04.086906  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:04.086922  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:04.086929  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:04.118136  870218 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 12:49:04.586198  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:04.586239  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:04.586248  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:04.586253  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:04.590710  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:05.087118  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:05.087150  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:05.087158  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:05.087162  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:05.090865  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:05.091510  870218 node_ready.go:53] node "ha-212075-m02" has status "Ready":"False"
	I0429 12:49:05.586466  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:05.586495  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:05.586506  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:05.586511  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:05.591095  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:06.087044  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:06.087077  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:06.087093  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:06.087099  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:06.091377  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:06.587078  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:06.587107  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:06.587116  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:06.587121  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:06.592788  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:07.086787  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:07.086820  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:07.086831  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:07.086839  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:07.091631  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:07.092318  870218 node_ready.go:53] node "ha-212075-m02" has status "Ready":"False"
	I0429 12:49:07.586426  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:07.586461  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:07.586474  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:07.586481  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:07.593188  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:49:08.087222  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.087249  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.087260  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.087265  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.219961  870218 round_trippers.go:574] Response Status: 200 OK in 132 milliseconds
	I0429 12:49:08.220604  870218 node_ready.go:49] node "ha-212075-m02" has status "Ready":"True"
	I0429 12:49:08.220632  870218 node_ready.go:38] duration metric: took 5.134660973s for node "ha-212075-m02" to be "Ready" ...
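	The node_ready wait above polls GET /api/v1/nodes/<name> roughly every 500ms until the node reports a Ready condition of "True". A minimal client-go sketch of the same check (kubeconfig path and node name are assumptions; this is not minikube's kverify code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "ha-212075-m02")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // polling interval comparable to the log above
		}
	}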
	I0429 12:49:08.220646  870218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:49:08.220738  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:08.220753  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.220764  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.220773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.229060  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:49:08.235147  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.235258  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2t8g
	I0429 12:49:08.235267  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.235275  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.235281  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.314925  870218 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0429 12:49:08.315803  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.315828  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.315840  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.315847  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.329212  870218 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:49:08.329863  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.329897  870218 pod_ready.go:81] duration metric: took 94.712953ms for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.329913  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.330038  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x299s
	I0429 12:49:08.330053  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.330064  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.330072  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.333533  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.334362  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.334381  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.334391  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.334398  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.338012  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.338971  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.338995  870218 pod_ready.go:81] duration metric: took 9.067885ms for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.339012  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.339105  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075
	I0429 12:49:08.339117  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.339128  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.339137  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.342835  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.343473  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:08.343496  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.343507  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.343516  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.349400  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:08.349955  870218 pod_ready.go:92] pod "etcd-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:08.349979  870218 pod_ready.go:81] duration metric: took 10.955166ms for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.349992  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:08.350072  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:08.350082  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.350093  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.350099  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.353467  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.354125  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.354146  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.354156  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.354163  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.357021  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:08.850289  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:08.850319  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.850331  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.850336  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.854284  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:08.854924  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:08.854948  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:08.854958  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:08.854963  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:08.858066  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.351005  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:09.351035  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.351057  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.351065  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.354889  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.356025  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:09.356044  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.356052  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.356057  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.359159  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:09.850927  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:09.850957  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.850970  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.850983  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.856731  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:09.857966  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:09.857984  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:09.857992  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:09.857998  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:09.860636  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:10.350922  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:10.350956  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.350966  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.350971  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.355448  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:10.356559  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:10.356578  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.356587  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.356591  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.360394  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:10.361486  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:10.850899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:10.850929  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.850944  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.850949  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.855382  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:10.856870  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:10.856902  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:10.856914  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:10.856920  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:10.861517  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:11.350799  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:11.350827  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.350834  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.350838  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.354605  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:11.355335  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:11.355353  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.355378  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.355383  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.359577  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:11.851128  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:11.851157  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.851168  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.851173  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.855182  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:11.856109  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:11.856128  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:11.856138  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:11.856143  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:11.859010  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:12.350392  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:12.350424  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.350432  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.350436  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.354740  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:12.355746  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:12.355765  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.355773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.355777  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.358545  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:12.850371  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:12.850402  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.850412  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.850418  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.854752  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:12.855683  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:12.855700  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:12.855711  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:12.855718  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:12.858834  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:12.859529  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:13.351114  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:13.351145  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.351153  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.351156  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.355082  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:13.356231  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:13.356252  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.356259  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.356264  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.359839  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:13.850437  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:13.850465  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.850473  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.850477  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.855157  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:13.855992  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:13.856011  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:13.856018  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:13.856022  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:13.859036  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.351293  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:14.351330  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.351340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.351345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.356455  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:14.357250  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:14.357269  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.357277  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.357282  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.361121  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.850961  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:14.851003  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.851016  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.851022  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.855394  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:14.856176  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:14.856195  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:14.856202  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:14.856205  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:14.859738  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:14.860328  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:15.350798  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:15.350830  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.350841  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.350845  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.355817  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:15.357189  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:15.357221  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.357230  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.357246  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.361330  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:15.850942  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:15.850977  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.850986  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.850989  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.854870  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:15.855657  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:15.855678  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:15.855686  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:15.855690  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:15.859126  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.351154  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:16.351183  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.351191  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.351196  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.359437  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:49:16.360301  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:16.360324  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.360336  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.360342  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.363465  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.850422  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:16.850456  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.850466  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.850471  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.855565  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:16.856378  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:16.856396  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:16.856404  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:16.856409  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:16.860272  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:16.860861  870218 pod_ready.go:102] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 12:49:17.350900  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:17.350927  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.350935  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.350938  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.355909  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:17.356733  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:17.356756  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.356767  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.356773  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.360272  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:17.850305  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:17.850332  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.850340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.850345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.854785  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:17.855453  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:17.855469  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:17.855477  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:17.855482  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:17.858541  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.350427  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:18.350455  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.350468  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.350474  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.354383  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.354996  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.355013  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.355021  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.355025  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.358793  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.851129  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:49:18.851157  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.851166  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.851171  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.855430  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:18.856095  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.856112  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.856120  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.856123  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.859800  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.860384  870218 pod_ready.go:92] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.860405  870218 pod_ready.go:81] duration metric: took 10.5104068s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
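
The ~10.5s wait logged above is a plain poll loop: roughly every 500ms the pod and its node are fetched again and the pod's Ready condition is re-checked. A minimal client-go sketch of that pattern (illustrative only, not minikube's actual pod_ready.go; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; minikube resolves this from its own profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, up to 6 minutes, until the pod reports Ready=True.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-212075-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient errors and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("pod ready:", err == nil)
	}
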
	I0429 12:49:18.860424  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.860487  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075
	I0429 12:49:18.860495  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.860502  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.860507  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.863876  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.864907  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:18.864922  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.864930  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.864933  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.868515  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.869138  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.869156  870218 pod_ready.go:81] duration metric: took 8.723811ms for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.869166  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.869238  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m02
	I0429 12:49:18.869246  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.869254  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.869261  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.872567  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.873514  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.873527  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.873535  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.873543  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.876640  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.877214  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.877232  870218 pod_ready.go:81] duration metric: took 8.058402ms for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.877242  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.877307  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075
	I0429 12:49:18.877316  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.877322  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.877328  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.880468  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.881203  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:18.881224  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.881235  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.881241  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.884107  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:18.885148  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.885169  870218 pod_ready.go:81] duration metric: took 7.919576ms for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.885180  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.885305  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:49:18.885316  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.885323  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.885328  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.888193  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:49:18.888830  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:18.888845  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:18.888856  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:18.888861  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:18.892058  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:18.892610  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:18.892627  870218 pod_ready.go:81] duration metric: took 7.442078ms for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:18.892638  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.052102  870218 request.go:629] Waited for 159.379221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:49:19.052177  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:49:19.052184  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.052194  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.052199  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.061885  870218 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:49:19.252140  870218 request.go:629] Waited for 189.369987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:19.252224  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:19.252232  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.252243  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.252261  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.255881  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:19.256791  870218 pod_ready.go:92] pod "kube-proxy-ncdsk" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:19.256814  870218 pod_ready.go:81] duration metric: took 364.169187ms for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.256826  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.451933  870218 request.go:629] Waited for 195.014453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:49:19.452030  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:49:19.452042  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.452054  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.452062  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.455910  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:19.652261  870218 request.go:629] Waited for 195.417524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:19.652340  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:19.652348  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.652359  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.652364  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.656950  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:19.657658  870218 pod_ready.go:92] pod "kube-proxy-sfmhh" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:19.657680  870218 pod_ready.go:81] duration metric: took 400.848571ms for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.657691  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:19.851979  870218 request.go:629] Waited for 194.202023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:49:19.852092  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:49:19.852100  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:19.852111  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:19.852117  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:19.856054  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.051273  870218 request.go:629] Waited for 194.323385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:20.051405  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:49:20.051419  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.051427  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.051431  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.055030  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.055579  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:20.055603  870218 pod_ready.go:81] duration metric: took 397.905703ms for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.055614  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.251711  870218 request.go:629] Waited for 196.011102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:49:20.251809  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:49:20.251817  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.251838  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.251857  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.256067  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:20.451164  870218 request.go:629] Waited for 194.30553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:20.451272  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:49:20.451280  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.451291  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.451297  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.454603  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:49:20.455129  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:49:20.455160  870218 pod_ready.go:81] duration metric: took 399.537636ms for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:49:20.455176  870218 pod_ready.go:38] duration metric: took 12.234512578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
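
The "Waited for ... due to client-side throttling, not priority and fairness" messages interleaved above come from client-go's client-side rate limiter, which delays requests once the per-client QPS/Burst budget is used up. If bursts of status GETs like these are expected, the limiter can be relaxed when the rest.Config is built; a small sketch with illustrative values (the kubeconfig path is a placeholder):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left unset, client-go falls back to conservative defaults and
		// throttles locally, producing the "Waited for ..." log lines seen above.
		cfg.QPS = 50
		cfg.Burst = 100

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
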
	I0429 12:49:20.455196  870218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:49:20.455270  870218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:49:20.472016  870218 api_server.go:72] duration metric: took 17.617201161s to wait for apiserver process to appear ...
	I0429 12:49:20.472049  870218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:49:20.472071  870218 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0429 12:49:20.478128  870218 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0429 12:49:20.478206  870218 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0429 12:49:20.478214  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.478222  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.478229  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.479205  870218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 12:49:20.479326  870218 api_server.go:141] control plane version: v1.30.0
	I0429 12:49:20.479348  870218 api_server.go:131] duration metric: took 7.292177ms to wait for apiserver health ...
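
The health gate above is two inexpensive requests: a raw GET of /healthz (a healthy apiserver answers 200 with the literal body "ok", exactly as logged) followed by /version to read the control-plane version. A minimal client-go equivalent (illustrative; placeholder kubeconfig path):

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// GET /healthz directly through the discovery REST client.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		fmt.Println("healthz:", string(body))

		// GET /version, the same request the log issues right after the healthz check.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}
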
	I0429 12:49:20.479376  870218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:49:20.651833  870218 request.go:629] Waited for 172.3703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:20.651916  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:20.651921  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.651930  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.651933  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.658972  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:49:20.663967  870218 system_pods.go:59] 17 kube-system pods found
	I0429 12:49:20.664028  870218 system_pods.go:61] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:49:20.664035  870218 system_pods.go:61] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:49:20.664039  870218 system_pods.go:61] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:49:20.664043  870218 system_pods.go:61] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:49:20.664046  870218 system_pods.go:61] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:49:20.664049  870218 system_pods.go:61] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:49:20.664052  870218 system_pods.go:61] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:49:20.664058  870218 system_pods.go:61] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:49:20.664061  870218 system_pods.go:61] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:49:20.664066  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:49:20.664069  870218 system_pods.go:61] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:49:20.664074  870218 system_pods.go:61] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:49:20.664078  870218 system_pods.go:61] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:49:20.664082  870218 system_pods.go:61] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:49:20.664087  870218 system_pods.go:61] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:49:20.664090  870218 system_pods.go:61] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:49:20.664092  870218 system_pods.go:61] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:49:20.664099  870218 system_pods.go:74] duration metric: took 184.712298ms to wait for pod list to return data ...
	I0429 12:49:20.664110  870218 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:49:20.851865  870218 request.go:629] Waited for 187.634894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:49:20.851947  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:49:20.851952  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:20.851960  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:20.851965  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:20.856481  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:49:20.856703  870218 default_sa.go:45] found service account: "default"
	I0429 12:49:20.856718  870218 default_sa.go:55] duration metric: took 192.601184ms for default service account to be created ...
	I0429 12:49:20.856727  870218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:49:21.052256  870218 request.go:629] Waited for 195.417866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:21.052333  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:49:21.052351  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:21.052361  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:21.052366  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:21.057947  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:49:21.062642  870218 system_pods.go:86] 17 kube-system pods found
	I0429 12:49:21.062687  870218 system_pods.go:89] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:49:21.062696  870218 system_pods.go:89] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:49:21.062703  870218 system_pods.go:89] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:49:21.062708  870218 system_pods.go:89] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:49:21.062714  870218 system_pods.go:89] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:49:21.062720  870218 system_pods.go:89] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:49:21.062727  870218 system_pods.go:89] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:49:21.062733  870218 system_pods.go:89] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:49:21.062739  870218 system_pods.go:89] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:49:21.062746  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:49:21.062757  870218 system_pods.go:89] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:49:21.062765  870218 system_pods.go:89] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:49:21.062774  870218 system_pods.go:89] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:49:21.062782  870218 system_pods.go:89] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:49:21.062792  870218 system_pods.go:89] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:49:21.062800  870218 system_pods.go:89] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:49:21.062806  870218 system_pods.go:89] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:49:21.062820  870218 system_pods.go:126] duration metric: took 206.083067ms to wait for k8s-apps to be running ...
	I0429 12:49:21.062833  870218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:49:21.062894  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:49:21.080250  870218 system_svc.go:56] duration metric: took 17.405204ms WaitForService to wait for kubelet
	I0429 12:49:21.080292  870218 kubeadm.go:576] duration metric: took 18.225480527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
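
The kubelet liveness check above is simply `sudo systemctl is-active --quiet service kubelet` executed on the VM over SSH; with --quiet, systemd reports the result through the exit status alone. A local sketch of the same exit-code check (illustrative; assumes a host that actually has a kubelet unit):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; a zero exit status means the unit is active.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
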
	I0429 12:49:21.080313  870218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:49:21.251755  870218 request.go:629] Waited for 171.363431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0429 12:49:21.251820  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0429 12:49:21.251825  870218 round_trippers.go:469] Request Headers:
	I0429 12:49:21.251832  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:49:21.251837  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:49:21.258283  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:49:21.259370  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:49:21.259409  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:49:21.259433  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:49:21.259439  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:49:21.259446  870218 node_conditions.go:105] duration metric: took 179.126951ms to run NodePressure ...
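
The NodePressure step lists every node and reads the capacities echoed above (17734596Ki of ephemeral storage and 2 CPUs per node). A sketch of reading those capacities from node status (illustrative; placeholder kubeconfig path):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}
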
	I0429 12:49:21.259466  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:49:21.259550  870218 start.go:254] writing updated cluster config ...
	I0429 12:49:21.261846  870218 out.go:177] 
	I0429 12:49:21.263452  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:21.263575  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:21.265353  870218 out.go:177] * Starting "ha-212075-m03" control-plane node in "ha-212075" cluster
	I0429 12:49:21.266472  870218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:49:21.266506  870218 cache.go:56] Caching tarball of preloaded images
	I0429 12:49:21.266623  870218 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:49:21.266635  870218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:49:21.266802  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:21.267040  870218 start.go:360] acquireMachinesLock for ha-212075-m03: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:49:21.267097  870218 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "ha-212075-m03"
	I0429 12:49:21.267118  870218 start.go:93] Provisioning new machine with config: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:21.267234  870218 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0429 12:49:21.268814  870218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:49:21.268990  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:21.269033  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:21.286412  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0429 12:49:21.286903  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:21.287420  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:21.287439  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:21.287775  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:21.288007  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:21.288192  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:21.288372  870218 start.go:159] libmachine.API.Create for "ha-212075" (driver="kvm2")
	I0429 12:49:21.288401  870218 client.go:168] LocalClient.Create starting
	I0429 12:49:21.288434  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 12:49:21.288469  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:49:21.288485  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:49:21.288542  870218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 12:49:21.288560  870218 main.go:141] libmachine: Decoding PEM data...
	I0429 12:49:21.288570  870218 main.go:141] libmachine: Parsing certificate...
	I0429 12:49:21.288584  870218 main.go:141] libmachine: Running pre-create checks...
	I0429 12:49:21.288592  870218 main.go:141] libmachine: (ha-212075-m03) Calling .PreCreateCheck
	I0429 12:49:21.288811  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:21.289218  870218 main.go:141] libmachine: Creating machine...
	I0429 12:49:21.289234  870218 main.go:141] libmachine: (ha-212075-m03) Calling .Create
	I0429 12:49:21.289387  870218 main.go:141] libmachine: (ha-212075-m03) Creating KVM machine...
	I0429 12:49:21.291003  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found existing default KVM network
	I0429 12:49:21.291207  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found existing private KVM network mk-ha-212075
	I0429 12:49:21.291323  870218 main.go:141] libmachine: (ha-212075-m03) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 ...
	I0429 12:49:21.291379  870218 main.go:141] libmachine: (ha-212075-m03) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:49:21.291474  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.291312  871030 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:49:21.291566  870218 main.go:141] libmachine: (ha-212075-m03) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:49:21.553727  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.553607  871030 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa...
	I0429 12:49:21.655477  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.655312  871030 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/ha-212075-m03.rawdisk...
	I0429 12:49:21.655512  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Writing magic tar header
	I0429 12:49:21.655527  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Writing SSH key tar header
	I0429 12:49:21.655537  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:21.655481  871030 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 ...
	I0429 12:49:21.655661  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03
	I0429 12:49:21.655697  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03 (perms=drwx------)
	I0429 12:49:21.655710  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 12:49:21.655734  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:49:21.655747  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 12:49:21.655763  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:49:21.655775  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:49:21.655790  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Checking permissions on dir: /home
	I0429 12:49:21.655852  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Skipping /home - not owner
	I0429 12:49:21.655872  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:49:21.655886  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 12:49:21.655898  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 12:49:21.655912  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:49:21.655924  870218 main.go:141] libmachine: (ha-212075-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:49:21.655938  870218 main.go:141] libmachine: (ha-212075-m03) Creating domain...
	I0429 12:49:21.656952  870218 main.go:141] libmachine: (ha-212075-m03) define libvirt domain using xml: 
	I0429 12:49:21.656983  870218 main.go:141] libmachine: (ha-212075-m03) <domain type='kvm'>
	I0429 12:49:21.656994  870218 main.go:141] libmachine: (ha-212075-m03)   <name>ha-212075-m03</name>
	I0429 12:49:21.657006  870218 main.go:141] libmachine: (ha-212075-m03)   <memory unit='MiB'>2200</memory>
	I0429 12:49:21.657019  870218 main.go:141] libmachine: (ha-212075-m03)   <vcpu>2</vcpu>
	I0429 12:49:21.657028  870218 main.go:141] libmachine: (ha-212075-m03)   <features>
	I0429 12:49:21.657034  870218 main.go:141] libmachine: (ha-212075-m03)     <acpi/>
	I0429 12:49:21.657039  870218 main.go:141] libmachine: (ha-212075-m03)     <apic/>
	I0429 12:49:21.657044  870218 main.go:141] libmachine: (ha-212075-m03)     <pae/>
	I0429 12:49:21.657051  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657056  870218 main.go:141] libmachine: (ha-212075-m03)   </features>
	I0429 12:49:21.657061  870218 main.go:141] libmachine: (ha-212075-m03)   <cpu mode='host-passthrough'>
	I0429 12:49:21.657067  870218 main.go:141] libmachine: (ha-212075-m03)   
	I0429 12:49:21.657074  870218 main.go:141] libmachine: (ha-212075-m03)   </cpu>
	I0429 12:49:21.657079  870218 main.go:141] libmachine: (ha-212075-m03)   <os>
	I0429 12:49:21.657089  870218 main.go:141] libmachine: (ha-212075-m03)     <type>hvm</type>
	I0429 12:49:21.657129  870218 main.go:141] libmachine: (ha-212075-m03)     <boot dev='cdrom'/>
	I0429 12:49:21.657156  870218 main.go:141] libmachine: (ha-212075-m03)     <boot dev='hd'/>
	I0429 12:49:21.657169  870218 main.go:141] libmachine: (ha-212075-m03)     <bootmenu enable='no'/>
	I0429 12:49:21.657180  870218 main.go:141] libmachine: (ha-212075-m03)   </os>
	I0429 12:49:21.657189  870218 main.go:141] libmachine: (ha-212075-m03)   <devices>
	I0429 12:49:21.657203  870218 main.go:141] libmachine: (ha-212075-m03)     <disk type='file' device='cdrom'>
	I0429 12:49:21.657216  870218 main.go:141] libmachine: (ha-212075-m03)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/boot2docker.iso'/>
	I0429 12:49:21.657227  870218 main.go:141] libmachine: (ha-212075-m03)       <target dev='hdc' bus='scsi'/>
	I0429 12:49:21.657257  870218 main.go:141] libmachine: (ha-212075-m03)       <readonly/>
	I0429 12:49:21.657282  870218 main.go:141] libmachine: (ha-212075-m03)     </disk>
	I0429 12:49:21.657296  870218 main.go:141] libmachine: (ha-212075-m03)     <disk type='file' device='disk'>
	I0429 12:49:21.657306  870218 main.go:141] libmachine: (ha-212075-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:49:21.657320  870218 main.go:141] libmachine: (ha-212075-m03)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/ha-212075-m03.rawdisk'/>
	I0429 12:49:21.657333  870218 main.go:141] libmachine: (ha-212075-m03)       <target dev='hda' bus='virtio'/>
	I0429 12:49:21.657343  870218 main.go:141] libmachine: (ha-212075-m03)     </disk>
	I0429 12:49:21.657358  870218 main.go:141] libmachine: (ha-212075-m03)     <interface type='network'>
	I0429 12:49:21.657370  870218 main.go:141] libmachine: (ha-212075-m03)       <source network='mk-ha-212075'/>
	I0429 12:49:21.657382  870218 main.go:141] libmachine: (ha-212075-m03)       <model type='virtio'/>
	I0429 12:49:21.657395  870218 main.go:141] libmachine: (ha-212075-m03)     </interface>
	I0429 12:49:21.657403  870218 main.go:141] libmachine: (ha-212075-m03)     <interface type='network'>
	I0429 12:49:21.657410  870218 main.go:141] libmachine: (ha-212075-m03)       <source network='default'/>
	I0429 12:49:21.657425  870218 main.go:141] libmachine: (ha-212075-m03)       <model type='virtio'/>
	I0429 12:49:21.657438  870218 main.go:141] libmachine: (ha-212075-m03)     </interface>
	I0429 12:49:21.657446  870218 main.go:141] libmachine: (ha-212075-m03)     <serial type='pty'>
	I0429 12:49:21.657458  870218 main.go:141] libmachine: (ha-212075-m03)       <target port='0'/>
	I0429 12:49:21.657468  870218 main.go:141] libmachine: (ha-212075-m03)     </serial>
	I0429 12:49:21.657477  870218 main.go:141] libmachine: (ha-212075-m03)     <console type='pty'>
	I0429 12:49:21.657492  870218 main.go:141] libmachine: (ha-212075-m03)       <target type='serial' port='0'/>
	I0429 12:49:21.657504  870218 main.go:141] libmachine: (ha-212075-m03)     </console>
	I0429 12:49:21.657514  870218 main.go:141] libmachine: (ha-212075-m03)     <rng model='virtio'>
	I0429 12:49:21.657526  870218 main.go:141] libmachine: (ha-212075-m03)       <backend model='random'>/dev/random</backend>
	I0429 12:49:21.657536  870218 main.go:141] libmachine: (ha-212075-m03)     </rng>
	I0429 12:49:21.657548  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657555  870218 main.go:141] libmachine: (ha-212075-m03)     
	I0429 12:49:21.657562  870218 main.go:141] libmachine: (ha-212075-m03)   </devices>
	I0429 12:49:21.657571  870218 main.go:141] libmachine: (ha-212075-m03) </domain>
	I0429 12:49:21.657592  870218 main.go:141] libmachine: (ha-212075-m03) 
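
The block above is the kvm2 driver assembling a libvirt domain XML for ha-212075-m03 (2 vCPUs, 2200 MiB, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs on the mk-ha-212075 and default networks), which it then defines and boots. A heavily simplified sketch of defining and starting a domain from such XML via the Go libvirt bindings (assumes the libvirt.org/go/libvirt package and a trimmed-down XML; this is not the kvm2 driver's actual code):

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt" // assumed binding; the driver vendors its own copy
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Stand-in for the full <domain type='kvm'>...</domain> document logged above.
		domainXML := `<domain type='kvm'>
		  <name>sketch-vm</name>
		  <memory unit='MiB'>2200</memory>
		  <vcpu>2</vcpu>
		  <os><type>hvm</type></os>
		</domain>`

		// Define the persistent domain, then start it (roughly virsh define + virsh start).
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			panic(err)
		}
		fmt.Println("domain defined and started")
	}
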
	I0429 12:49:21.666856  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:f8:63:70 in network default
	I0429 12:49:21.667717  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring networks are active...
	I0429 12:49:21.667743  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:21.668658  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring network default is active
	I0429 12:49:21.669085  870218 main.go:141] libmachine: (ha-212075-m03) Ensuring network mk-ha-212075 is active
	I0429 12:49:21.669490  870218 main.go:141] libmachine: (ha-212075-m03) Getting domain xml...
	I0429 12:49:21.670407  870218 main.go:141] libmachine: (ha-212075-m03) Creating domain...
	I0429 12:49:22.960796  870218 main.go:141] libmachine: (ha-212075-m03) Waiting to get IP...
	I0429 12:49:22.961597  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:22.962057  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:22.962099  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:22.962046  871030 retry.go:31] will retry after 275.195421ms: waiting for machine to come up
	I0429 12:49:23.238662  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.239199  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.239240  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.239147  871030 retry.go:31] will retry after 254.361022ms: waiting for machine to come up
	I0429 12:49:23.495797  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.496358  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.496391  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.496305  871030 retry.go:31] will retry after 399.111276ms: waiting for machine to come up
	I0429 12:49:23.897726  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:23.898280  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:23.898315  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:23.898236  871030 retry.go:31] will retry after 423.835443ms: waiting for machine to come up
	I0429 12:49:24.324377  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:24.324945  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:24.324974  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:24.324899  871030 retry.go:31] will retry after 676.971457ms: waiting for machine to come up
	I0429 12:49:25.003929  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:25.004292  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:25.004315  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:25.004262  871030 retry.go:31] will retry after 923.473252ms: waiting for machine to come up
	I0429 12:49:25.928825  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:25.929398  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:25.929436  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:25.929332  871030 retry.go:31] will retry after 855.800309ms: waiting for machine to come up
	I0429 12:49:26.786759  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:26.787218  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:26.787248  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:26.787181  871030 retry.go:31] will retry after 999.873188ms: waiting for machine to come up
	I0429 12:49:27.788564  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:27.789010  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:27.789035  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:27.788950  871030 retry.go:31] will retry after 1.830294576s: waiting for machine to come up
	I0429 12:49:29.622339  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:29.622964  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:29.623001  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:29.622895  871030 retry.go:31] will retry after 2.277621565s: waiting for machine to come up
	I0429 12:49:31.901933  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:31.902475  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:31.902524  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:31.902398  871030 retry.go:31] will retry after 2.203385625s: waiting for machine to come up
	I0429 12:49:34.108550  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:34.108982  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:34.109014  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:34.108936  871030 retry.go:31] will retry after 3.624223076s: waiting for machine to come up
	I0429 12:49:37.735007  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:37.735616  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find current IP address of domain ha-212075-m03 in network mk-ha-212075
	I0429 12:49:37.735646  870218 main.go:141] libmachine: (ha-212075-m03) DBG | I0429 12:49:37.735568  871030 retry.go:31] will retry after 4.166668795s: waiting for machine to come up
	I0429 12:49:41.903602  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:41.904123  870218 main.go:141] libmachine: (ha-212075-m03) Found IP for machine: 192.168.39.109
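The repeated "will retry after ..." lines above are a poll-until-ready loop with a growing, jittered delay while the new VM acquires a DHCP lease. The following is a minimal, self-contained sketch of that pattern only; the lookup function, timings and addresses are placeholders, not minikube's actual retry.go code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with jittered, roughly doubling backoff until it
// returns an address or the timeout expires.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2 // grow the base delay, capped at 5s
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet") // simulate the lease not existing yet
		}
		return "192.168.39.109", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}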
	I0429 12:49:41.904142  870218 main.go:141] libmachine: (ha-212075-m03) Reserving static IP address...
	I0429 12:49:41.904152  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has current primary IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:41.904544  870218 main.go:141] libmachine: (ha-212075-m03) DBG | unable to find host DHCP lease matching {name: "ha-212075-m03", mac: "52:54:00:1c:04:a1", ip: "192.168.39.109"} in network mk-ha-212075
	I0429 12:49:41.999071  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Getting to WaitForSSH function...
	I0429 12:49:41.999113  870218 main.go:141] libmachine: (ha-212075-m03) Reserved static IP address: 192.168.39.109
	I0429 12:49:41.999129  870218 main.go:141] libmachine: (ha-212075-m03) Waiting for SSH to be available...
	I0429 12:49:42.001885  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.002602  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.002632  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.002653  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using SSH client type: external
	I0429 12:49:42.002666  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa (-rw-------)
	I0429 12:49:42.002696  870218 main.go:141] libmachine: (ha-212075-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:49:42.002710  870218 main.go:141] libmachine: (ha-212075-m03) DBG | About to run SSH command:
	I0429 12:49:42.002748  870218 main.go:141] libmachine: (ha-212075-m03) DBG | exit 0
	I0429 12:49:42.132018  870218 main.go:141] libmachine: (ha-212075-m03) DBG | SSH cmd err, output: <nil>: 
	I0429 12:49:42.132286  870218 main.go:141] libmachine: (ha-212075-m03) KVM machine creation complete!
	I0429 12:49:42.132639  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:42.133225  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:42.133438  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:42.133643  870218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:49:42.133665  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:49:42.135130  870218 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:49:42.135148  870218 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:49:42.135157  870218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:49:42.135168  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.137902  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.138260  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.138291  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.138462  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.138672  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.138886  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.139066  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.139259  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.139548  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.139562  870218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:49:42.251331  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
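Both SSH waits above settle the question of reachability by running "exit 0" on the guest and checking the exit status. Below is a stdlib-only sketch of that probe, shelling out to the system ssh client with options like those printed in the log; the host address and key path are illustrative values, not taken from a real environment.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds against the guest.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // a nil error means the remote command exited 0
}

func main() {
	host, key := "192.168.39.109", "/path/to/id_rsa" // hypothetical values
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for ssh")
}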
	I0429 12:49:42.251394  870218 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:49:42.251407  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.255739  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.256325  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.256371  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.256822  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.257086  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.257291  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.257464  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.257701  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.257939  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.257959  870218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:49:42.372811  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:49:42.372885  870218 main.go:141] libmachine: found compatible host: buildroot
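Provisioner detection above amounts to reading /etc/os-release on the guest and matching its ID/NAME fields (Buildroot here). A small sketch of that parsing follows, assuming the file contents were already fetched over SSH; the parsing helper is an illustration, not minikube's implementation.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
// stripping surrounding quotes from values.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			info[k] = strings.Trim(v, `"`)
		}
	}
	return info
}

func main() {
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
	}
}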
	I0429 12:49:42.372892  870218 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:49:42.372902  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.373263  870218 buildroot.go:166] provisioning hostname "ha-212075-m03"
	I0429 12:49:42.373296  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.373540  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.376574  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.377111  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.377148  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.377277  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.377493  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.377667  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.377828  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.378048  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.378311  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.378330  870218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075-m03 && echo "ha-212075-m03" | sudo tee /etc/hostname
	I0429 12:49:42.504636  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075-m03
	
	I0429 12:49:42.504679  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.507608  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.508004  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.508030  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.508303  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.508548  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.508754  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.508886  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.509117  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.509339  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.509357  870218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:49:42.626792  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
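The shell run above makes the hostname mapping idempotent: do nothing if the name already appears in /etc/hosts, otherwise rewrite the 127.0.1.1 line or append one. The same logic as a Go sketch, operating on a scratch file so it can run without root (the real command edits /etc/hosts over SSH).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: leave the file alone if the
// hostname is present, otherwise rewrite an existing 127.0.1.1 alias line or
// append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, line := range lines {
		for _, field := range strings.Fields(line) {
			if field == hostname {
				return nil // already mapped, nothing to do
			}
		}
	}
	replaced := false
	for i, line := range lines {
		if f := strings.Fields(line); len(f) > 0 && f[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // no alias yet: append one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("hosts.test", "ha-212075-m03")) // scratch file, not /etc/hosts
}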
	I0429 12:49:42.626829  870218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:49:42.626849  870218 buildroot.go:174] setting up certificates
	I0429 12:49:42.626863  870218 provision.go:84] configureAuth start
	I0429 12:49:42.626876  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetMachineName
	I0429 12:49:42.627259  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:42.630150  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.630519  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.630552  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.630703  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.633425  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.633770  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.633798  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.633925  870218 provision.go:143] copyHostCerts
	I0429 12:49:42.633964  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:49:42.634010  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:49:42.634023  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:49:42.634119  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:49:42.634237  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:49:42.634263  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:49:42.634273  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:49:42.634318  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:49:42.634403  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:49:42.634426  870218 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:49:42.634434  870218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:49:42.634467  870218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:49:42.634540  870218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075-m03 san=[127.0.0.1 192.168.39.109 ha-212075-m03 localhost minikube]
	I0429 12:49:42.737197  870218 provision.go:177] copyRemoteCerts
	I0429 12:49:42.737263  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:49:42.737297  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.740003  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.740382  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.740442  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.740606  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.740806  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.740978  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.741155  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:42.827122  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:49:42.827206  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:49:42.855209  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:49:42.855317  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:49:42.883770  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:49:42.883851  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:49:42.911410  870218 provision.go:87] duration metric: took 284.528347ms to configureAuth
	I0429 12:49:42.911452  870218 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:49:42.911733  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:42.911834  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:42.914793  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.915175  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:42.915208  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:42.915408  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:42.915653  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.915839  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:42.915991  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:42.916165  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:42.916385  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:42.916411  870218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:49:43.217344  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:49:43.217385  870218 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:49:43.217396  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetURL
	I0429 12:49:43.219000  870218 main.go:141] libmachine: (ha-212075-m03) DBG | Using libvirt version 6000000
	I0429 12:49:43.221697  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.222061  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.222087  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.222270  870218 main.go:141] libmachine: Docker is up and running!
	I0429 12:49:43.222283  870218 main.go:141] libmachine: Reticulating splines...
	I0429 12:49:43.222291  870218 client.go:171] duration metric: took 21.933879944s to LocalClient.Create
	I0429 12:49:43.222314  870218 start.go:167] duration metric: took 21.933944364s to libmachine.API.Create "ha-212075"
	I0429 12:49:43.222324  870218 start.go:293] postStartSetup for "ha-212075-m03" (driver="kvm2")
	I0429 12:49:43.222335  870218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:49:43.222370  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.222650  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:49:43.222690  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.225352  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.225819  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.225855  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.226068  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.226288  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.226485  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.226624  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.316706  870218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:49:43.321843  870218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:49:43.321882  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:49:43.321994  870218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:49:43.322091  870218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:49:43.322104  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:49:43.322368  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:49:43.334078  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:49:43.361992  870218 start.go:296] duration metric: took 139.649645ms for postStartSetup
	I0429 12:49:43.362063  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetConfigRaw
	I0429 12:49:43.362790  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:43.365832  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.366363  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.366399  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.366832  870218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:49:43.367146  870218 start.go:128] duration metric: took 22.099896004s to createHost
	I0429 12:49:43.367183  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.369765  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.370219  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.370248  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.370419  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.370666  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.370874  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.371071  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.371236  870218 main.go:141] libmachine: Using SSH client type: native
	I0429 12:49:43.371460  870218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0429 12:49:43.371478  870218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:49:43.485175  870218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394983.453178524
	
	I0429 12:49:43.485209  870218 fix.go:216] guest clock: 1714394983.453178524
	I0429 12:49:43.485228  870218 fix.go:229] Guest: 2024-04-29 12:49:43.453178524 +0000 UTC Remote: 2024-04-29 12:49:43.367166051 +0000 UTC m=+152.742319003 (delta=86.012473ms)
	I0429 12:49:43.485253  870218 fix.go:200] guest clock delta is within tolerance: 86.012473ms
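The clock check above runs `date +%s.%N` on the guest, parses the epoch timestamp, and compares it against the host clock. The sketch below reproduces only the comparison step, using the value printed in the log; the 2-second tolerance is an assumption for the example, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log above.
	guestRaw := "1714394983.453178524"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// In the real flow this would be the host's clock at the moment of the
	// probe; here it is the "Remote" timestamp from the log.
	host := time.Date(2024, 4, 29, 12, 49, 43, 367166051, time.UTC)

	delta := guest.Sub(host)
	tolerance := 2 * time.Second // assumed tolerance for the sketch
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("guest clock delta %v, within tolerance %v: %v\n", delta, tolerance, within)
}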
	I0429 12:49:43.485260  870218 start.go:83] releasing machines lock for "ha-212075-m03", held for 22.218152522s
	I0429 12:49:43.485292  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.485628  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:43.488595  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.489047  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.489074  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.491763  870218 out.go:177] * Found network options:
	I0429 12:49:43.493406  870218 out.go:177]   - NO_PROXY=192.168.39.97,192.168.39.36
	W0429 12:49:43.494652  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:49:43.494677  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:49:43.494698  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.495627  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.495861  870218 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:49:43.496005  870218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:49:43.496064  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	W0429 12:49:43.496177  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:49:43.496205  870218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:49:43.496282  870218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:49:43.496308  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:49:43.499425  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.499745  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.499847  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.499903  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.500060  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.500281  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.500316  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:43.500332  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:43.500490  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:49:43.500615  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.500728  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:49:43.500799  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.500895  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:49:43.501079  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:49:43.745296  870218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:49:43.753042  870218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:49:43.753146  870218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:49:43.771662  870218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:49:43.771702  870218 start.go:494] detecting cgroup driver to use...
	I0429 12:49:43.771785  870218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:49:43.789253  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:49:43.805629  870218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:49:43.805716  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:49:43.823411  870218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:49:43.839119  870218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:49:43.965205  870218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:49:44.144532  870218 docker.go:233] disabling docker service ...
	I0429 12:49:44.144615  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:49:44.161598  870218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:49:44.176924  870218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:49:44.318518  870218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:49:44.444464  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:49:44.460146  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:49:44.482406  870218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:49:44.482480  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.495415  870218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:49:44.495495  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.507625  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.520065  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.532758  870218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:49:44.545179  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.557185  870218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:49:44.577205  870218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
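The CRI-O settings above are applied with in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl). A rough Go equivalent of one such edit follows, replacing a whole `key = value` line with a regexp against a scratch copy of the file; it is a sketch of the pattern, not minikube's crio.go.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLLine replaces any existing `key = ...` line with `key = "value"`,
// much like the sed expressions in the log.
func setTOMLLine(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf.
	path := "02-crio.conf.test"
	_ = os.WriteFile(path, []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"), 0644)
	fmt.Println(setTOMLLine(path, "pause_image", "registry.k8s.io/pause:3.9"))
	fmt.Println(setTOMLLine(path, "cgroup_manager", "cgroupfs"))
	updated, _ := os.ReadFile(path)
	fmt.Print(string(updated))
}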
	I0429 12:49:44.590527  870218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:49:44.601541  870218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:49:44.601614  870218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:49:44.618752  870218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
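Above, the netfilter check fails because the br_netfilter module is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A sketch of the same check-then-fix sequence is below; it needs root (and a Linux host) to actually change anything and otherwise just reports the errors.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of `sysctl net.bridge.bridge-nf-call-iptables`: the file only
	// exists once br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v (%s)\n", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}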
	I0429 12:49:44.630649  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:44.760043  870218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:49:44.908104  870218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:49:44.908203  870218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:49:44.913590  870218 start.go:562] Will wait 60s for crictl version
	I0429 12:49:44.913671  870218 ssh_runner.go:195] Run: which crictl
	I0429 12:49:44.917832  870218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:49:44.967004  870218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:49:44.967123  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:49:45.000292  870218 ssh_runner.go:195] Run: crio --version
	I0429 12:49:45.033598  870218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:49:45.034927  870218 out.go:177]   - env NO_PROXY=192.168.39.97
	I0429 12:49:45.036448  870218 out.go:177]   - env NO_PROXY=192.168.39.97,192.168.39.36
	I0429 12:49:45.037641  870218 main.go:141] libmachine: (ha-212075-m03) Calling .GetIP
	I0429 12:49:45.040460  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:45.040872  870218 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:49:45.040897  870218 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:49:45.041102  870218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:49:45.045938  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:49:45.060030  870218 mustload.go:65] Loading cluster: ha-212075
	I0429 12:49:45.060296  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:49:45.060651  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:45.060702  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:45.076464  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0429 12:49:45.076966  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:45.077478  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:45.077508  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:45.077859  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:45.078069  870218 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:49:45.079901  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:49:45.080243  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:45.080285  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:45.096699  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0429 12:49:45.097237  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:45.097836  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:45.097862  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:45.098219  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:45.098405  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:49:45.098548  870218 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.109
	I0429 12:49:45.098562  870218 certs.go:194] generating shared ca certs ...
	I0429 12:49:45.098585  870218 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.098756  870218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:49:45.098808  870218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:49:45.098823  870218 certs.go:256] generating profile certs ...
	I0429 12:49:45.098924  870218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:49:45.098980  870218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a
	I0429 12:49:45.099003  870218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.109 192.168.39.254]
	I0429 12:49:45.305371  870218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a ...
	I0429 12:49:45.305425  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a: {Name:mk17ce06665377b1ef8d805c47fa76e8dc7207f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.305633  870218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a ...
	I0429 12:49:45.305648  870218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a: {Name:mk93b7c74bfe26fde2277c8d3d88ed9da0ad319b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:49:45.305724  870218 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.ed1ead6a -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:49:45.305871  870218 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.ed1ead6a -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
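The apiserver certificate generated above is signed by the shared minikube CA and carries every control-plane endpoint as a SAN, including the new node's IP and the HA virtual IP 192.168.39.254, so clients can validate the server through any of those addresses. A compact sketch of issuing such a SAN-bearing certificate from a throwaway CA with Go's crypto/x509 follows; keys are generated on the fly, error handling is elided, and only the names and IPs come from the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with DNS and IP SANs, signed by the CA above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-212075-m03"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-212075-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.36"),
			net.ParseIP("192.168.39.109"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	srvCert, _ := x509.ParseCertificate(srvDER)
	fmt.Println("issued cert for", srvCert.Subject.CommonName, "with SANs", srvCert.DNSNames, srvCert.IPAddresses)
}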
	I0429 12:49:45.306065  870218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:49:45.306084  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:49:45.306097  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:49:45.306107  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:49:45.306122  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:49:45.306135  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:49:45.306148  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:49:45.306159  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:49:45.306177  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:49:45.306231  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:49:45.306261  870218 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:49:45.306271  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:49:45.306291  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:49:45.306311  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:49:45.306339  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:49:45.306382  870218 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:49:45.306422  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:49:45.306443  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:49:45.306461  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:45.306505  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:49:45.310065  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:45.310591  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:49:45.310624  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:45.310800  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:49:45.311020  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:49:45.311175  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:49:45.311404  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:49:45.391809  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 12:49:45.398724  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 12:49:45.413103  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 12:49:45.418467  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 12:49:45.432802  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 12:49:45.440754  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 12:49:45.455596  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 12:49:45.461211  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 12:49:45.474952  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 12:49:45.480353  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 12:49:45.494613  870218 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 12:49:45.500103  870218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 12:49:45.514344  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:49:45.543413  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:49:45.571376  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:49:45.599504  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:49:45.626755  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 12:49:45.654437  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:49:45.683324  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:49:45.711211  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:49:45.741014  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:49:45.771670  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:49:45.799792  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:49:45.827179  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 12:49:45.846410  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 12:49:45.865742  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 12:49:45.884880  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 12:49:45.906579  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 12:49:45.934587  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 12:49:45.954505  870218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 12:49:45.975997  870218 ssh_runner.go:195] Run: openssl version
	I0429 12:49:45.982992  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:49:45.997527  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.003152  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.003236  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:49:46.010012  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:49:46.023112  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:49:46.036215  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.041157  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.041267  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:49:46.047620  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:49:46.060305  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:49:46.073926  870218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.079569  870218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.079679  870218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:49:46.086524  870218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
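(The symlink names created above — 51391683.0, 3ec20f2e.0, b5213941.0 — are the OpenSSL subject hashes of the corresponding CA certificates, which is how the system trust store resolves issuers. A minimal sketch of reproducing one of those links by hand, reusing the minikubeCA paths from the log; illustrative only, not part of the test:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints the subject hash, e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"             # the .0 suffix indexes hash collisions
)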
	I0429 12:49:46.099843  870218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:49:46.105249  870218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:49:46.105323  870218 kubeadm.go:928] updating node {m03 192.168.39.109 8443 v1.30.0 crio true true} ...
	I0429 12:49:46.105422  870218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:49:46.105448  870218 kube-vip.go:111] generating kube-vip config ...
	I0429 12:49:46.105491  870218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:49:46.128538  870218 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
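(Control-plane load-balancing is auto-enabled here because the IPVS modules loaded by the modprobe call above are available. A hedged spot check on the node, not run by the test, would be:

    lsmod | grep -E 'ip_vs|nf_conntrack'   # expect the five modules from the modprobe line
)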
	I0429 12:49:46.128670  870218 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
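(The rendered kube-vip manifest above pins the virtual IP 192.168.39.254 to eth0 and advertises port 8443. As a purely illustrative smoke test — not something minikube runs, and dependent on the cluster's default RBAC allowing anonymous reads of /healthz — one could probe the VIP once an apiserver sits behind it:

    curl -k https://192.168.39.254:8443/healthz   # -k skips TLS verification; expect the literal response "ok"
)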
	I0429 12:49:46.128781  870218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:49:46.141116  870218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:49:46.141207  870218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:49:46.154231  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 12:49:46.154250  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 12:49:46.154278  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:49:46.154231  870218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:49:46.154301  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:49:46.154310  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:49:46.154344  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:49:46.154399  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:49:46.167622  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:49:46.167674  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:49:46.183740  870218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:49:46.183810  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:49:46.183852  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:49:46.183861  870218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:49:46.248362  870218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:49:46.248422  870218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
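(The binary.go lines above fetch kubelet, kubeadm and kubectl from dl.k8s.io and verify them against the published .sha256 files. A hedged sketch of the same check done by hand, using the kubelet URL printed in the log; the .sha256 file contains only the hex digest:

    curl -fsSLo kubelet https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
)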
	I0429 12:49:47.245635  870218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 12:49:47.257815  870218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 12:49:47.279835  870218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:49:47.302241  870218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:49:47.321197  870218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:49:47.325960  870218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:49:47.340454  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:49:47.476126  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:49:47.495980  870218 host.go:66] Checking if "ha-212075" exists ...
	I0429 12:49:47.496376  870218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:49:47.496420  870218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:49:47.513164  870218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0429 12:49:47.513691  870218 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:49:47.514330  870218 main.go:141] libmachine: Using API Version  1
	I0429 12:49:47.514356  870218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:49:47.514754  870218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:49:47.514985  870218 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:49:47.515150  870218 start.go:316] joinCluster: &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:49:47.515340  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:49:47.515385  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:49:47.518988  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:47.519544  870218 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:49:47.519582  870218 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:49:47.519819  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:49:47.520072  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:49:47.520249  870218 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:49:47.520432  870218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:49:47.723515  870218 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:49:47.723578  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vnojfi.4shmv2la5ipmuekk --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m03 --control-plane --apiserver-advertise-address=192.168.39.109 --apiserver-bind-port=8443"
	I0429 12:50:11.558514  870218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vnojfi.4shmv2la5ipmuekk --discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-212075-m03 --control-plane --apiserver-advertise-address=192.168.39.109 --apiserver-bind-port=8443": (23.834907661s)
	I0429 12:50:11.558556  870218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:50:12.191163  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-212075-m03 minikube.k8s.io/updated_at=2024_04_29T12_50_12_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=ha-212075 minikube.k8s.io/primary=false
	I0429 12:50:12.332310  870218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-212075-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 12:50:12.457983  870218 start.go:318] duration metric: took 24.942805115s to joinCluster
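(After the kubeadm join completes, the log shows the new node being labelled (minikube.k8s.io/primary=false and friends) and the control-plane NoSchedule taint being removed. A hedged way to confirm both results from the primary node, reusing the kubectl binary and kubeconfig paths printed in the log; illustrative only:

    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-212075-m03 --show-labels
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe node ha-212075-m03 | grep -A2 Taints
)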
	I0429 12:50:12.458080  870218 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:50:12.459647  870218 out.go:177] * Verifying Kubernetes components...
	I0429 12:50:12.458462  870218 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:50:12.460883  870218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:50:12.786218  870218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:50:12.827610  870218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:50:12.827947  870218 kapi.go:59] client config for ha-212075: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.crt", KeyFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key", CAFile:"/home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 12:50:12.828022  870218 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0429 12:50:12.828333  870218 node_ready.go:35] waiting up to 6m0s for node "ha-212075-m03" to be "Ready" ...
	I0429 12:50:12.828437  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:12.828446  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:12.828457  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:12.828466  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:12.832030  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:13.328972  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:13.329002  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:13.329012  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:13.329016  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:13.333488  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:13.829280  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:13.829310  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:13.829317  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:13.829321  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:13.833464  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.328604  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:14.328634  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:14.328647  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:14.328654  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:14.332954  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.829170  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:14.829194  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:14.829202  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:14.829206  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:14.833857  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:14.834791  870218 node_ready.go:53] node "ha-212075-m03" has status "Ready":"False"
	I0429 12:50:15.329188  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:15.329215  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:15.329224  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:15.329228  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:15.333208  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:15.829111  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:15.829145  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:15.829156  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:15.829162  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:15.833305  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:16.328929  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:16.328964  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:16.328977  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:16.328983  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:16.332860  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:16.828885  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:16.828914  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:16.828923  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:16.828928  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:16.832858  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:17.328914  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:17.328949  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:17.328960  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:17.328966  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:17.333213  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:17.334349  870218 node_ready.go:53] node "ha-212075-m03" has status "Ready":"False"
	I0429 12:50:17.828574  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:17.828607  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:17.828618  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:17.828622  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:17.833207  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:18.329374  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:18.329402  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.329410  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.329415  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.333316  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.828563  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:18.828591  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.828600  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.828603  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.833296  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:18.833938  870218 node_ready.go:49] node "ha-212075-m03" has status "Ready":"True"
	I0429 12:50:18.833966  870218 node_ready.go:38] duration metric: took 6.005614001s for node "ha-212075-m03" to be "Ready" ...
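(The polling loop above checks the Ready condition of ha-212075-m03 by repeatedly GETting the node object. A roughly equivalent one-liner — illustrative, not what the test runs — using the kubeconfig path from the log:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait --for=condition=Ready node/ha-212075-m03 --timeout=6m
)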
	I0429 12:50:18.833976  870218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:50:18.834052  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:18.834061  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.834069  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.834076  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.848836  870218 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 12:50:18.858243  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.858371  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2t8g
	I0429 12:50:18.858382  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.858395  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.858405  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.861615  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.862314  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.862334  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.862342  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.862347  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.865782  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.866457  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.866479  870218 pod_ready.go:81] duration metric: took 8.200804ms for pod "coredns-7db6d8ff4d-c2t8g" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.866490  870218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.866552  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x299s
	I0429 12:50:18.866560  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.866567  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.866572  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.870040  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.870670  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.870686  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.870696  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.870702  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.874007  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.874553  870218 pod_ready.go:92] pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.874575  870218 pod_ready.go:81] duration metric: took 8.079218ms for pod "coredns-7db6d8ff4d-x299s" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.874586  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.874655  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075
	I0429 12:50:18.874665  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.874674  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.874680  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.878665  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.879434  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:18.879460  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.879471  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.879478  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.882752  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:18.883423  870218 pod_ready.go:92] pod "etcd-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.883447  870218 pod_ready.go:81] duration metric: took 8.854916ms for pod "etcd-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.883459  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.883533  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m02
	I0429 12:50:18.883540  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.883548  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.883553  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.886433  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:50:18.887117  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:18.887135  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:18.887143  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:18.887147  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:18.889866  870218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:50:18.890552  870218 pod_ready.go:92] pod "etcd-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:18.890577  870218 pod_ready.go:81] duration metric: took 7.108063ms for pod "etcd-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:18.890591  870218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:19.028989  870218 request.go:629] Waited for 138.314405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.029067  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.029072  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.029080  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.029085  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.032924  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.229133  870218 request.go:629] Waited for 195.395298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.229202  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.229207  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.229217  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.229220  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.232853  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.428983  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.429028  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.429039  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.429044  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.432974  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.629442  870218 request.go:629] Waited for 195.37277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.629530  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:19.629538  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.629547  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.629554  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.633456  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:19.891393  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:19.891419  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:19.891429  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:19.891433  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:19.895758  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:20.028865  870218 request.go:629] Waited for 132.326326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.028930  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.028936  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.028944  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.028948  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.032892  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.390899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:20.390924  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.390932  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.390936  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.394598  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.428843  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.428894  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.428908  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.428916  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.432578  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.891566  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:20.891605  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.891617  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.891624  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.895968  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:20.896842  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:20.896866  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:20.896880  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:20.896885  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:20.900749  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:20.901250  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:21.391390  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:21.391422  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.391432  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.391437  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.395486  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:21.396301  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:21.396323  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.396333  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.396337  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.399629  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:21.891709  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:21.891737  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.891746  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.891751  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.895525  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:21.896515  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:21.896534  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:21.896544  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:21.896549  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:21.899591  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:22.391086  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:22.391120  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.391129  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.391134  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.395272  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:22.396260  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:22.396280  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.396292  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.396298  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.399934  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:22.891706  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:22.891741  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.891754  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.891762  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.895857  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:22.896485  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:22.896507  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:22.896518  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:22.896524  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:22.899783  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:23.391764  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:23.391802  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.391813  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.391819  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.396591  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:23.397678  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:23.397697  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.397706  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.397712  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.401022  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:23.401638  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:23.891473  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:23.891518  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.891527  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.891531  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.895998  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:23.897166  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:23.897188  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:23.897198  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:23.897203  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:23.900834  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.391767  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:24.391793  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.391801  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.391806  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.395683  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.396364  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:24.396382  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.396390  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.396394  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.399833  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.890825  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:24.890853  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.890862  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.890866  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.894777  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:24.895773  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:24.895795  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:24.895803  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:24.895807  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:24.899690  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.391837  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:25.391877  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.391890  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.391897  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.395911  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.396638  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:25.396658  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.396667  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.396672  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.400113  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.890897  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:25.890925  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.890933  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.890937  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.894971  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:25.895752  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:25.895776  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:25.895786  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:25.895792  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:25.899327  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:25.900011  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:26.391728  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:26.391755  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.391766  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.391771  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.395868  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:26.396629  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:26.396652  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.396659  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.396662  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.399977  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:26.890872  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:26.890906  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.890916  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.890921  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.894973  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:26.895971  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:26.895993  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:26.896002  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:26.896005  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:26.899197  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.390790  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:27.390819  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.390828  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.390832  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.395080  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:27.396304  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:27.396327  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.396340  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.396345  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.399685  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.891328  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:27.891383  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.891396  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.891408  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.895963  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:27.896759  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:27.896777  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:27.896785  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:27.896789  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:27.900307  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:27.900827  870218 pod_ready.go:102] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 12:50:28.391176  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:28.391206  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.391214  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.391219  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.395304  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.396189  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.396216  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.396228  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.396234  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.399510  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.891313  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-212075-m03
	I0429 12:50:28.891343  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.891351  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.891368  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.896027  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.897004  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.897024  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.897033  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.897037  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.900096  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.900611  870218 pod_ready.go:92] pod "etcd-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.900634  870218 pod_ready.go:81] duration metric: took 10.01003378s for pod "etcd-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.900658  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.900736  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075
	I0429 12:50:28.900748  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.900759  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.900772  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.904062  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.905012  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:28.905033  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.905041  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.905046  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.914390  870218 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:50:28.915162  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.915184  870218 pod_ready.go:81] duration metric: took 14.517505ms for pod "kube-apiserver-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.915206  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.915288  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m02
	I0429 12:50:28.915299  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.915310  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.915316  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.918801  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.919480  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:28.919499  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.919511  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.919518  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.925028  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:28.925481  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.925502  870218 pod_ready.go:81] duration metric: took 10.2876ms for pod "kube-apiserver-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.925512  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.925600  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-212075-m03
	I0429 12:50:28.925609  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.925617  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.925622  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.930192  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:28.930899  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:28.930923  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.930934  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.930939  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.936104  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:28.936664  870218 pod_ready.go:92] pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.936690  870218 pod_ready.go:81] duration metric: took 11.171571ms for pod "kube-apiserver-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.936706  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.936798  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075
	I0429 12:50:28.936813  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.936821  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.936825  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.945481  870218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:50:28.946622  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:28.946652  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:28.946664  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:28.946671  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:28.950691  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:28.951321  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:28.951353  870218 pod_ready.go:81] duration metric: took 14.638423ms for pod "kube-controller-manager-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:28.951391  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.091822  870218 request.go:629] Waited for 140.318624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:50:29.091900  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m02
	I0429 12:50:29.091906  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.091914  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.091922  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.096036  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:29.292235  870218 request.go:629] Waited for 195.433606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:29.292308  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:29.292313  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.292320  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.292327  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.296173  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:29.296828  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:29.296855  870218 pod_ready.go:81] duration metric: took 345.456965ms for pod "kube-controller-manager-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.296868  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.491886  870218 request.go:629] Waited for 194.903497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m03
	I0429 12:50:29.491968  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-212075-m03
	I0429 12:50:29.491976  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.491987  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.491995  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.497776  870218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:50:29.691599  870218 request.go:629] Waited for 192.371637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:29.691712  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:29.691717  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.691726  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.691731  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.695802  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:29.696538  870218 pod_ready.go:92] pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:29.696564  870218 pod_ready.go:81] duration metric: took 399.690214ms for pod "kube-controller-manager-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.696579  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c27wn" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:29.892095  870218 request.go:629] Waited for 195.391435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27wn
	I0429 12:50:29.892181  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27wn
	I0429 12:50:29.892188  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:29.892199  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:29.892207  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:29.896134  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.092329  870218 request.go:629] Waited for 195.502679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:30.092433  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:30.092441  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.092452  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.092459  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.096200  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.096729  870218 pod_ready.go:92] pod "kube-proxy-c27wn" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.096753  870218 pod_ready.go:81] duration metric: took 400.166366ms for pod "kube-proxy-c27wn" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.096765  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.291287  870218 request.go:629] Waited for 194.439552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:50:30.291447  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ncdsk
	I0429 12:50:30.291461  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.291474  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.291485  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.297628  870218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:50:30.492144  870218 request.go:629] Waited for 193.412376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:30.492229  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:30.492248  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.492260  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.492268  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.496334  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:30.496860  870218 pod_ready.go:92] pod "kube-proxy-ncdsk" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.496885  870218 pod_ready.go:81] duration metric: took 400.112924ms for pod "kube-proxy-ncdsk" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.496899  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.691664  870218 request.go:629] Waited for 194.681632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:50:30.692181  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sfmhh
	I0429 12:50:30.692198  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.692210  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.692215  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.696295  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:30.891291  870218 request.go:629] Waited for 194.303719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:30.891444  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:30.891459  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:30.891470  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:30.891477  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:30.895385  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:30.896331  870218 pod_ready.go:92] pod "kube-proxy-sfmhh" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:30.896364  870218 pod_ready.go:81] duration metric: took 399.456713ms for pod "kube-proxy-sfmhh" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:30.896378  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.092314  870218 request.go:629] Waited for 195.838169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:50:31.092382  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075
	I0429 12:50:31.092387  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.092395  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.092403  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.096642  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:31.291736  870218 request.go:629] Waited for 194.40661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:31.291832  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075
	I0429 12:50:31.291839  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.291847  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.291853  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.295531  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:31.296465  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:31.296496  870218 pod_ready.go:81] duration metric: took 400.108799ms for pod "kube-scheduler-ha-212075" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.296513  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.491925  870218 request.go:629] Waited for 195.318095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:50:31.491999  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m02
	I0429 12:50:31.492008  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.492016  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.492029  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.496031  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:31.692069  870218 request.go:629] Waited for 195.409831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:31.692136  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m02
	I0429 12:50:31.692141  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.692149  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.692154  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.696368  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:31.697106  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:31.697129  870218 pod_ready.go:81] duration metric: took 400.605212ms for pod "kube-scheduler-ha-212075-m02" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.697143  870218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:31.891352  870218 request.go:629] Waited for 194.092342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m03
	I0429 12:50:31.891478  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-212075-m03
	I0429 12:50:31.891491  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:31.891503  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:31.891512  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:31.895522  870218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:50:32.091710  870218 request.go:629] Waited for 195.413817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:32.091787  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-212075-m03
	I0429 12:50:32.091792  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.091801  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.091806  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.095880  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.096637  870218 pod_ready.go:92] pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 12:50:32.096661  870218 pod_ready.go:81] duration metric: took 399.509578ms for pod "kube-scheduler-ha-212075-m03" in "kube-system" namespace to be "Ready" ...
	I0429 12:50:32.096673  870218 pod_ready.go:38] duration metric: took 13.262684539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:50:32.096690  870218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:50:32.096751  870218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:50:32.117800  870218 api_server.go:72] duration metric: took 19.659670409s to wait for apiserver process to appear ...
	I0429 12:50:32.117840  870218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:50:32.117869  870218 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0429 12:50:32.123543  870218 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0429 12:50:32.123629  870218 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0429 12:50:32.123638  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.123645  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.123653  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.124638  870218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 12:50:32.124714  870218 api_server.go:141] control plane version: v1.30.0
	I0429 12:50:32.124735  870218 api_server.go:131] duration metric: took 6.886333ms to wait for apiserver health ...
	I0429 12:50:32.124744  870218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:50:32.292129  870218 request.go:629] Waited for 167.303987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.292211  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.292216  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.292224  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.292230  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.299499  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:50:32.306492  870218 system_pods.go:59] 24 kube-system pods found
	I0429 12:50:32.306546  870218 system_pods.go:61] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:50:32.306553  870218 system_pods.go:61] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:50:32.306559  870218 system_pods.go:61] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:50:32.306564  870218 system_pods.go:61] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:50:32.306569  870218 system_pods.go:61] "etcd-ha-212075-m03" [92f8e094-a516-4426-a1c5-5f92d2022603] Running
	I0429 12:50:32.306575  870218 system_pods.go:61] "kindnet-2d8zp" [43b594a8-818d-423a-80f3-ad2b5dc79785] Running
	I0429 12:50:32.306579  870218 system_pods.go:61] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:50:32.306584  870218 system_pods.go:61] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:50:32.306591  870218 system_pods.go:61] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:50:32.306596  870218 system_pods.go:61] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:50:32.306600  870218 system_pods.go:61] "kube-apiserver-ha-212075-m03" [7484f88d-78bb-486c-9bc7-71c2a779083b] Running
	I0429 12:50:32.306605  870218 system_pods.go:61] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:50:32.306611  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:50:32.306620  870218 system_pods.go:61] "kube-controller-manager-ha-212075-m03" [94aae029-f109-447d-8080-4f41c99b4dbb] Running
	I0429 12:50:32.306626  870218 system_pods.go:61] "kube-proxy-c27wn" [c45c40a2-2b5d-495f-862a-9e54d6fd6a69] Running
	I0429 12:50:32.306634  870218 system_pods.go:61] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:50:32.306639  870218 system_pods.go:61] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:50:32.306647  870218 system_pods.go:61] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:50:32.306652  870218 system_pods.go:61] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:50:32.306660  870218 system_pods.go:61] "kube-scheduler-ha-212075-m03" [0029c03a-f2cd-4964-a1f9-71127fc72819] Running
	I0429 12:50:32.306665  870218 system_pods.go:61] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:50:32.306674  870218 system_pods.go:61] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:50:32.306678  870218 system_pods.go:61] "kube-vip-ha-212075-m03" [68d29842-0ac7-4c12-a12c-546a42040bb2] Running
	I0429 12:50:32.306685  870218 system_pods.go:61] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:50:32.306694  870218 system_pods.go:74] duration metric: took 181.939437ms to wait for pod list to return data ...
	I0429 12:50:32.306709  870218 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:50:32.491873  870218 request.go:629] Waited for 185.06023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:50:32.491951  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:50:32.491957  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.491964  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.491970  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.496044  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.496184  870218 default_sa.go:45] found service account: "default"
	I0429 12:50:32.496202  870218 default_sa.go:55] duration metric: took 189.48187ms for default service account to be created ...
	I0429 12:50:32.496212  870218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:50:32.692200  870218 request.go:629] Waited for 195.908867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.692277  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0429 12:50:32.692285  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.692298  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.692309  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.700260  870218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:50:32.707846  870218 system_pods.go:86] 24 kube-system pods found
	I0429 12:50:32.707891  870218 system_pods.go:89] "coredns-7db6d8ff4d-c2t8g" [343d2b3e-1dde-4bf1-b27a-d720d1b21ef4] Running
	I0429 12:50:32.707906  870218 system_pods.go:89] "coredns-7db6d8ff4d-x299s" [441b065a-2b42-4ac5-889e-c18200f43691] Running
	I0429 12:50:32.707913  870218 system_pods.go:89] "etcd-ha-212075" [4c8ad5e6-9375-455f-bae6-3fb3e8f51a0b] Running
	I0429 12:50:32.707920  870218 system_pods.go:89] "etcd-ha-212075-m02" [89f561a6-6871-405d-81fc-2d08b1746ffd] Running
	I0429 12:50:32.707927  870218 system_pods.go:89] "etcd-ha-212075-m03" [92f8e094-a516-4426-a1c5-5f92d2022603] Running
	I0429 12:50:32.707934  870218 system_pods.go:89] "kindnet-2d8zp" [43b594a8-818d-423a-80f3-ad2b5dc79785] Running
	I0429 12:50:32.707943  870218 system_pods.go:89] "kindnet-sx2zd" [a678c6bd-59c7-4620-9a5d-87d0dfd0f12c] Running
	I0429 12:50:32.707953  870218 system_pods.go:89] "kindnet-vnw75" [d7b71f12-5d80-4c41-ae97-a4d7e023ec98] Running
	I0429 12:50:32.707960  870218 system_pods.go:89] "kube-apiserver-ha-212075" [50f980d0-c58d-430b-90cb-3d821a13bf52] Running
	I0429 12:50:32.707970  870218 system_pods.go:89] "kube-apiserver-ha-212075-m02" [ca7d4290-16e5-4dea-a9a6-507931fa8acd] Running
	I0429 12:50:32.707977  870218 system_pods.go:89] "kube-apiserver-ha-212075-m03" [7484f88d-78bb-486c-9bc7-71c2a779083b] Running
	I0429 12:50:32.707984  870218 system_pods.go:89] "kube-controller-manager-ha-212075" [87261df5-c5e2-4d17-99bd-4e3d4c90d658] Running
	I0429 12:50:32.707997  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m02" [83139960-a6ac-4cae-811f-2d55fb6114a6] Running
	I0429 12:50:32.708007  870218 system_pods.go:89] "kube-controller-manager-ha-212075-m03" [94aae029-f109-447d-8080-4f41c99b4dbb] Running
	I0429 12:50:32.708014  870218 system_pods.go:89] "kube-proxy-c27wn" [c45c40a2-2b5d-495f-862a-9e54d6fd6a69] Running
	I0429 12:50:32.708023  870218 system_pods.go:89] "kube-proxy-ncdsk" [632757a3-fa64-4483-af75-828e292ce184] Running
	I0429 12:50:32.708037  870218 system_pods.go:89] "kube-proxy-sfmhh" [6e4ed152-474f-4f58-84bb-16046d39e2ed] Running
	I0429 12:50:32.708043  870218 system_pods.go:89] "kube-scheduler-ha-212075" [1f0296ee-8103-4a99-b0ee-0730db753865] Running
	I0429 12:50:32.708049  870218 system_pods.go:89] "kube-scheduler-ha-212075-m02" [357354cb-865d-4b27-8adf-6324f178cafc] Running
	I0429 12:50:32.708055  870218 system_pods.go:89] "kube-scheduler-ha-212075-m03" [0029c03a-f2cd-4964-a1f9-71127fc72819] Running
	I0429 12:50:32.708062  870218 system_pods.go:89] "kube-vip-ha-212075" [44e6d402-7c09-4c33-9905-15f9d4a29381] Running
	I0429 12:50:32.708071  870218 system_pods.go:89] "kube-vip-ha-212075-m02" [d4927851-25a6-4b3d-84f6-95569c2fe4b7] Running
	I0429 12:50:32.708077  870218 system_pods.go:89] "kube-vip-ha-212075-m03" [68d29842-0ac7-4c12-a12c-546a42040bb2] Running
	I0429 12:50:32.708087  870218 system_pods.go:89] "storage-provisioner" [66e2d2b6-bf65-4b8a-ba39-9c99a83f633e] Running
	I0429 12:50:32.708096  870218 system_pods.go:126] duration metric: took 211.875538ms to wait for k8s-apps to be running ...
	I0429 12:50:32.708108  870218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:50:32.708158  870218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:50:32.729023  870218 system_svc.go:56] duration metric: took 20.905519ms WaitForService to wait for kubelet
	I0429 12:50:32.729060  870218 kubeadm.go:576] duration metric: took 20.270939588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:50:32.729082  870218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:50:32.891459  870218 request.go:629] Waited for 162.285599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0429 12:50:32.891529  870218 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0429 12:50:32.891542  870218 round_trippers.go:469] Request Headers:
	I0429 12:50:32.891550  870218 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:50:32.891556  870218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 12:50:32.895591  870218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:50:32.896692  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896723  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896738  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896743  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896748  870218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:50:32.896752  870218 node_conditions.go:123] node cpu capacity is 2
	I0429 12:50:32.896757  870218 node_conditions.go:105] duration metric: took 167.669117ms to run NodePressure ...
	I0429 12:50:32.896775  870218 start.go:240] waiting for startup goroutines ...
	I0429 12:50:32.896808  870218 start.go:254] writing updated cluster config ...
	I0429 12:50:32.897223  870218 ssh_runner.go:195] Run: rm -f paused
	I0429 12:50:32.955697  870218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 12:50:32.958387  870218 out.go:177] * Done! kubectl is now configured to use "ha-212075" cluster and "default" namespace by default
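	
	The pod_ready wait loop logged above repeatedly GETs each control-plane pod and its node until the pod's Ready condition turns True, then finishes the start-up verification with the apiserver /healthz and /version checks, the kube-system pod list, the default service account, the kubelet service, and the NodePressure conditions. Purely as an illustration of that readiness polling (this is not minikube's own code), the sketch below waits for a single pod to report Ready with client-go; the kubeconfig path, namespace, pod name, 6m0s budget, and ~500ms interval are taken from this run, so adjust them for any other cluster.
	
	```go
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Kubeconfig path, namespace and pod name are assumptions taken from this run.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18773-847310/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Mirror the log's "waiting up to 6m0s" budget and its ~500ms polling cadence.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-212075-m03", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	```
	
	Interactively, the same state can be read with `kubectl get pod -n kube-system etcd-ha-212075-m03 -o wide` against the kubeconfig written by this run.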
	
	
	==> CRI-O <==
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.375200057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395303375175613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f53fa985-eec9-47ff-81e8-f2004578db00 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.376154726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58ed0948-4871-4672-8e63-4bd191e84cde name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.376228350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58ed0948-4871-4672-8e63-4bd191e84cde name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.376459770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58ed0948-4871-4672-8e63-4bd191e84cde name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.417444895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99776b4f-8311-479a-98e2-2b6d900dad0f name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.417523538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99776b4f-8311-479a-98e2-2b6d900dad0f name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.418860258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa783a39-b411-4de1-b5a5-535c50284217 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.419284082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395303419256587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa783a39-b411-4de1-b5a5-535c50284217 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.419901343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=147152ef-7f99-4a93-8335-0d11fa7d3b43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.419957651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=147152ef-7f99-4a93-8335-0d11fa7d3b43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.420213697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=147152ef-7f99-4a93-8335-0d11fa7d3b43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.464797852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a01b1e6-8e01-41e1-8a51-f0fd6cf1386c name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.464877025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a01b1e6-8e01-41e1-8a51-f0fd6cf1386c name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.466333996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab139498-5427-4175-88f0-d91660421270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.466889707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395303466862442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab139498-5427-4175-88f0-d91660421270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.468298148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5770093d-060f-429b-bfc7-72d2327a6bb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.468456793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5770093d-060f-429b-bfc7-72d2327a6bb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.468810783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5770093d-060f-429b-bfc7-72d2327a6bb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.509450056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34dc27b5-83de-4d8c-8091-fd95616273d6 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.509965287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34dc27b5-83de-4d8c-8091-fd95616273d6 name=/runtime.v1.RuntimeService/Version
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.511588679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbbe45c4-11c7-42da-a519-86d328821720 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.512084657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395303512061103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbbe45c4-11c7-42da-a519-86d328821720 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.512724020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b6a98cc-be72-4993-ad97-9b29eb8be7bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.512777922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b6a98cc-be72-4993-ad97-9b29eb8be7bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 12:55:03 ha-212075 crio[678]: time="2024-04-29 12:55:03.513014549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395035320489557,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890366187649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714394890318589252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f,PodSandboxId:77a1ef53b73e0b1175a0b030e20cc727db07429b982b77afcbb43aa9e01b65f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714394890225724244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df,PodSandboxId:8f85b8a4ba604ff164d3558ddb0f0a19b427d7f03910a2fabb487a0d1e9cd3fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143948
87985326826,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714394887675604412,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523,PodSandboxId:13efcdd103317913e7e3068be22d5e63fce6354e6ff2080f5592b4188943988d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714394869189335644,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24190e4c2daab44202ef18cf148d0f29,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714394866813747579,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d,PodSandboxId:87588649b2c7923ea0d3d142063e04e513a3628028062e656137f21c6bf3b6f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714394866801264642,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16,PodSandboxId:f21fd4330dda0e4110f46aaae38cefac4d1c1af3e3e1bfc67f7f65c5b04578ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714394866715741281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714394866667237974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b6a98cc-be72-4993-ad97-9b29eb8be7bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6079fd69c4d07       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   377fa41dd93a5       busybox-fc5497c4f-rcq9m
	8923eb9969f74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   e0bec542cd689       coredns-7db6d8ff4d-c2t8g
	a7bedc2be5698       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   cb2c23b3b3b1c       coredns-7db6d8ff4d-x299s
	7101dd3def458       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   77a1ef53b73e0       storage-provisioner
	85ad5484c2e54       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   8f85b8a4ba604       kindnet-vnw75
	ae027e60b2a1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   84bca27dac841       kube-proxy-ncdsk
	0c3dc33eb6d5d       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   13efcdd103317       kube-vip-ha-212075
	220538e592762       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   258b9f1c2d733       kube-scheduler-ha-212075
	382081d5ba19b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   87588649b2c79       kube-controller-manager-ha-212075
	e9f8269450f85       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   f21fd4330dda0       kube-apiserver-ha-212075
	6ba91c742f08c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   814df27c007a6       etcd-ha-212075
	
	
	==> coredns [8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad] <==
	[INFO] 10.244.1.2:38828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298968s
	[INFO] 10.244.1.2:58272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000274219s
	[INFO] 10.244.1.2:41537 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154361s
	[INFO] 10.244.1.2:51430 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167101s
	[INFO] 10.244.0.4:39294 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002366491s
	[INFO] 10.244.0.4:47691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095562s
	[INFO] 10.244.0.4:49991 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146136s
	[INFO] 10.244.0.4:45880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133788s
	[INFO] 10.244.2.2:40297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017628s
	[INFO] 10.244.2.2:44282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974026s
	[INFO] 10.244.2.2:48058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199321s
	[INFO] 10.244.2.2:50097 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220995s
	[INFO] 10.244.2.2:60877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132114s
	[INFO] 10.244.2.2:38824 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114121s
	[INFO] 10.244.1.2:60691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192262s
	[INFO] 10.244.1.2:51664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123427s
	[INFO] 10.244.1.2:57326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156295s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105493s
	[INFO] 10.244.0.4:39454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248959s
	[INFO] 10.244.2.2:56559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010789s
	[INFO] 10.244.1.2:57860 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144445s
	[INFO] 10.244.1.2:40470 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145332s
	[INFO] 10.244.0.4:35067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124783s
	[INFO] 10.244.2.2:47889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150138s
	[INFO] 10.244.2.2:60310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091159s
	
	
	==> coredns [a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6] <==
	[INFO] 10.244.2.2:49673 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00011362s
	[INFO] 10.244.2.2:52287 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001865115s
	[INFO] 10.244.1.2:45655 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028655691s
	[INFO] 10.244.1.2:34986 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171925s
	[INFO] 10.244.1.2:48145 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003370523s
	[INFO] 10.244.1.2:43604 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158954s
	[INFO] 10.244.0.4:58453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127934s
	[INFO] 10.244.0.4:52484 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001277738s
	[INFO] 10.244.0.4:47770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128102s
	[INFO] 10.244.0.4:53060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103039s
	[INFO] 10.244.2.2:55991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854135s
	[INFO] 10.244.2.2:33533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090157s
	[INFO] 10.244.1.2:52893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090867s
	[INFO] 10.244.0.4:54479 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102901s
	[INFO] 10.244.0.4:53525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000359828s
	[INFO] 10.244.2.2:57755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000264423s
	[INFO] 10.244.2.2:47852 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118616s
	[INFO] 10.244.2.2:38289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112347s
	[INFO] 10.244.1.2:55092 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184788s
	[INFO] 10.244.1.2:52235 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146353s
	[INFO] 10.244.0.4:55598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137209s
	[INFO] 10.244.0.4:54649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121493s
	[INFO] 10.244.0.4:50694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136791s
	[INFO] 10.244.2.2:49177 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104896s
	[INFO] 10.244.2.2:41037 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088839s
	
	
	==> describe nodes <==
	Name:               ha-212075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:55:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:50:56 +0000   Mon, 29 Apr 2024 12:48:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-212075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eefe9cc034f74464a919edd5f6b61c2b
	  System UUID:                eefe9cc0-34f7-4464-a919-edd5f6b61c2b
	  Boot ID:                    20b6e47d-4696-4b2a-ba7c-62e73184f5c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rcq9m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-c2t8g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 coredns-7db6d8ff4d-x299s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 etcd-ha-212075                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-vnw75                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m57s
	  kube-system                 kube-apiserver-ha-212075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-ha-212075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-ncdsk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-scheduler-ha-212075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-vip-ha-212075                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  Starting                 7m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m17s (x7 over 7m17s)  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m17s (x8 over 7m17s)  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x8 over 7m17s)  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s                  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s                  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s                  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m57s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal  NodeReady                6m54s                  kubelet          Node ha-212075 status is now: NodeReady
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal  RegisteredNode           4m36s                  node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	
	
	Name:               ha-212075-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:48:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:51:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 12:51:02 +0000   Mon, 29 Apr 2024 12:52:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-212075-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c5f79339047d6aaf2c88397c97942
	  System UUID:                088c5f79-3390-47d6-aaf2-c88397c97942
	  Boot ID:                    26514912-b71e-458e-b679-e7e1ba2580cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q8rf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-212075-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m4s
	  kube-system                 kindnet-sx2zd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-212075-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-ha-212075-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-sfmhh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-212075-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-212075-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m2s                 node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           5m46s                node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           4m36s                node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  NodeNotReady             2m41s                node-controller  Node ha-212075-m02 status is now: NodeNotReady
	
	
	Name:               ha-212075-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_50_12_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:50:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:54:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:50:39 +0000   Mon, 29 Apr 2024 12:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-212075-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 535ef7f0e3c949a7801d0ab8f3e70b91
	  System UUID:                535ef7f0-e3c9-49a7-801d-0ab8f3e70b91
	  Boot ID:                    bc34a15b-f8d2-49d7-ac30-f44e734d2ed5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xw452                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-212075-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-2d8zp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-ha-212075-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-212075-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-c27wn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-ha-212075-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-vip-ha-212075-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-212075-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal  RegisteredNode           4m36s                  node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	
	
	Name:               ha-212075-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_51_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:54:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:51:43 +0000   Mon, 29 Apr 2024 12:51:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-212075-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee58aa83584b463285f294fa28d19e05
	  System UUID:                ee58aa83-584b-4632-85f2-94fa28d19e05
	  Boot ID:                    05003e8a-1683-4cb9-9ad4-cb1e00255e69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tbw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-bnbr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x3 over 3m52s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x3 over 3m52s)  kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x3 over 3m52s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal  NodeReady                3m43s                  kubelet          Node ha-212075-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053063] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042417] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.595489] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.659216] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.658122] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.473754] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.066120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063959] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171013] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136787] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.290881] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.567542] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.833264] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.214053] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.340857] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.082766] kauditd_printk_skb: 40 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.082460] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab] <==
	{"level":"warn","ts":"2024-04-29T12:55:03.84182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.855009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.857395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.864899Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.8739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.87834Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.883275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.89485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.904756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.919862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.925977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.926955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.931902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.947767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.954932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.955071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.9647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.969698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.97501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.984629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:03.993204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:04.004959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:04.043888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:04.045372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:55:04.054521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:55:04 up 7 min,  0 users,  load average: 0.31, 0.24, 0.12
	Linux ha-212075 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [85ad5484c2e545d7d4327f4b4a1257ea8f1bdf2af728bf6ff304883f154269df] <==
	I0429 12:54:29.654436       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:54:39.669953       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:54:39.669997       1 main.go:227] handling current node
	I0429 12:54:39.670008       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:54:39.670092       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:54:39.670217       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:54:39.670248       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:54:39.670297       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:54:39.670320       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:54:49.676716       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:54:49.676744       1 main.go:227] handling current node
	I0429 12:54:49.676755       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:54:49.676760       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:54:49.676885       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:54:49.676910       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:54:49.676962       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:54:49.676967       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 12:54:59.685164       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 12:54:59.685219       1 main.go:227] handling current node
	I0429 12:54:59.685238       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 12:54:59.685243       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 12:54:59.685546       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 12:54:59.685577       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 12:54:59.685637       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 12:54:59.685700       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16] <==
	I0429 12:47:52.206856       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 12:47:52.212802       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 12:47:53.195190       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 12:47:53.272240       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 12:47:53.319588       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 12:47:53.336489       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 12:48:06.889872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:48:07.345600       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0429 12:50:37.199894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50238: use of closed network connection
	E0429 12:50:37.422099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50256: use of closed network connection
	E0429 12:50:37.642894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50274: use of closed network connection
	E0429 12:50:37.900488       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50288: use of closed network connection
	E0429 12:50:38.116582       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50304: use of closed network connection
	E0429 12:50:38.344331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50320: use of closed network connection
	E0429 12:50:38.560181       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50328: use of closed network connection
	E0429 12:50:38.773132       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50354: use of closed network connection
	E0429 12:50:39.005884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50368: use of closed network connection
	E0429 12:50:39.389303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50394: use of closed network connection
	E0429 12:50:39.600752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50404: use of closed network connection
	E0429 12:50:39.815611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50424: use of closed network connection
	E0429 12:50:40.035011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50440: use of closed network connection
	E0429 12:50:40.245499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50462: use of closed network connection
	E0429 12:50:40.457578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50492: use of closed network connection
	W0429 12:51:52.205330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.97]
	
	
	==> kube-controller-manager [382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d] <==
	I0429 12:49:01.445000       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m02"
	I0429 12:50:08.617501       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-212075-m03\" does not exist"
	I0429 12:50:08.663917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-212075-m03" podCIDRs=["10.244.2.0/24"]
	I0429 12:50:11.474203       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m03"
	I0429 12:50:34.047890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.017808ms"
	I0429 12:50:34.146964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.922663ms"
	I0429 12:50:34.376985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.814363ms"
	E0429 12:50:34.377038       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 12:50:34.377201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.971µs"
	I0429 12:50:34.384897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.618µs"
	I0429 12:50:35.869494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.517378ms"
	I0429 12:50:35.869952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.254µs"
	I0429 12:50:36.030792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.135927ms"
	I0429 12:50:36.030945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.996µs"
	I0429 12:50:36.451336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.394422ms"
	I0429 12:50:36.451462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.817µs"
	I0429 12:51:12.921789       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-212075-m04\" does not exist"
	I0429 12:51:12.964704       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-212075-m04" podCIDRs=["10.244.3.0/24"]
	I0429 12:51:16.503129       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-212075-m04"
	I0429 12:51:21.706750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	I0429 12:52:22.540869       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	I0429 12:52:22.611915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.169574ms"
	I0429 12:52:22.612104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.399µs"
	I0429 12:52:22.663107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.847549ms"
	I0429 12:52:22.663624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.292µs"
	
	
	==> kube-proxy [ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d] <==
	I0429 12:48:07.949311       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:48:07.970912       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0429 12:48:08.111869       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:48:08.111971       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:48:08.112000       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:48:08.119057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:48:08.119347       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:48:08.119376       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:48:08.121414       1 config.go:192] "Starting service config controller"
	I0429 12:48:08.121424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:48:08.121444       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:48:08.121447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:48:08.121966       1 config.go:319] "Starting node config controller"
	I0429 12:48:08.121973       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:48:08.222195       1 shared_informer.go:320] Caches are synced for node config
	I0429 12:48:08.222222       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:48:08.222241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf] <==
	W0429 12:47:51.705755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:47:51.705803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:47:51.705952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:47:51.705995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:47:51.762409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:47:51.762955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:47:51.831035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:47:51.831222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 12:47:54.028994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 12:50:33.990786       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xw452\": pod busybox-fc5497c4f-xw452 is already assigned to node \"ha-212075-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xw452" node="ha-212075-m03"
	E0429 12:50:33.992233       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f23383ad-d9ed-46ed-9327-d850179b2822(default/busybox-fc5497c4f-xw452) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-xw452"
	E0429 12:50:33.996630       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xw452\": pod busybox-fc5497c4f-xw452 is already assigned to node \"ha-212075-m03\"" pod="default/busybox-fc5497c4f-xw452"
	I0429 12:50:33.997434       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-xw452" node="ha-212075-m03"
	E0429 12:51:13.049010       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d6tbw\": pod kindnet-d6tbw is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-d6tbw" node="ha-212075-m04"
	E0429 12:51:13.049114       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7effb27d-adcf-42ce-9c98-d1cb8db7fd04(kube-system/kindnet-d6tbw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d6tbw"
	E0429 12:51:13.049139       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d6tbw\": pod kindnet-d6tbw is already assigned to node \"ha-212075-m04\"" pod="kube-system/kindnet-d6tbw"
	I0429 12:51:13.049158       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d6tbw" node="ha-212075-m04"
	E0429 12:51:13.049476       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bnbr8\": pod kube-proxy-bnbr8 is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bnbr8" node="ha-212075-m04"
	E0429 12:51:13.049643       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 16945b1d-2d33-4a95-b9ad-03d0665b74e8(kube-system/kube-proxy-bnbr8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bnbr8"
	E0429 12:51:13.049799       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bnbr8\": pod kube-proxy-bnbr8 is already assigned to node \"ha-212075-m04\"" pod="kube-system/kube-proxy-bnbr8"
	I0429 12:51:13.049931       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bnbr8" node="ha-212075-m04"
	E0429 12:51:13.198840       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9qm85\": pod kindnet-9qm85 is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9qm85" node="ha-212075-m04"
	E0429 12:51:13.199120       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f060a9cf-2fcb-4ef4-8991-954beeaa1614(kube-system/kindnet-9qm85) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9qm85"
	E0429 12:51:13.199784       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9qm85\": pod kindnet-9qm85 is already assigned to node \"ha-212075-m04\"" pod="kube-system/kindnet-9qm85"
	I0429 12:51:13.199829       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9qm85" node="ha-212075-m04"
	
	
	==> kubelet <==
	Apr 29 12:50:53 ha-212075 kubelet[1362]: E0429 12:50:53.153020    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:50:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:50:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:51:53 ha-212075 kubelet[1362]: E0429 12:51:53.156281    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:51:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:51:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:52:53 ha-212075 kubelet[1362]: E0429 12:52:53.152345    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:52:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:52:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:53:53 ha-212075 kubelet[1362]: E0429 12:53:53.152414    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:53:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:53:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:54:53 ha-212075 kubelet[1362]: E0429 12:54:53.152844    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:54:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:54:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:54:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:54:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-212075 -n ha-212075
helpers_test.go:261: (dbg) Run:  kubectl --context ha-212075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.44s)
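The same post-mortem can be re-run by hand against the captured profile. A minimal sketch, assuming the ha-212075 profile and its kubeconfig context from this run still exist; the first and third commands are the ones helpers_test.go invokes above, and `kubectl describe nodes` is the hypothetical manual equivalent of the node dump shown in the logs:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-212075 -n ha-212075
	kubectl --context ha-212075 describe nodes
	kubectl --context ha-212075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running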

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-212075 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-212075 -v=7 --alsologtostderr
E0429 12:56:19.252823  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:56:46.937746  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-212075 -v=7 --alsologtostderr: exit status 82 (2m2.069419469s)

                                                
                                                
-- stdout --
	* Stopping node "ha-212075-m04"  ...
	* Stopping node "ha-212075-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:55:05.685612  876028 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:55:05.685925  876028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:55:05.685942  876028 out.go:304] Setting ErrFile to fd 2...
	I0429 12:55:05.685948  876028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:55:05.686182  876028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:55:05.686436  876028 out.go:298] Setting JSON to false
	I0429 12:55:05.686525  876028 mustload.go:65] Loading cluster: ha-212075
	I0429 12:55:05.686894  876028 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:55:05.687006  876028 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:55:05.687185  876028 mustload.go:65] Loading cluster: ha-212075
	I0429 12:55:05.687325  876028 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:55:05.687354  876028 stop.go:39] StopHost: ha-212075-m04
	I0429 12:55:05.687837  876028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:05.687889  876028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:05.704297  876028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36337
	I0429 12:55:05.704889  876028 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:05.705647  876028 main.go:141] libmachine: Using API Version  1
	I0429 12:55:05.705677  876028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:05.706104  876028 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:05.708205  876028 out.go:177] * Stopping node "ha-212075-m04"  ...
	I0429 12:55:05.709292  876028 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 12:55:05.709330  876028 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 12:55:05.709650  876028 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 12:55:05.709692  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 12:55:05.713062  876028 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:05.713538  876028 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:50:55 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 12:55:05.713574  876028 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 12:55:05.713711  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 12:55:05.713923  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 12:55:05.714120  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 12:55:05.714296  876028 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 12:55:05.799449  876028 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 12:55:05.854794  876028 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 12:55:05.909997  876028 main.go:141] libmachine: Stopping "ha-212075-m04"...
	I0429 12:55:05.910064  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:55:05.912063  876028 main.go:141] libmachine: (ha-212075-m04) Calling .Stop
	I0429 12:55:05.916258  876028 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 0/120
	I0429 12:55:07.230140  876028 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 12:55:07.231620  876028 main.go:141] libmachine: Machine "ha-212075-m04" was stopped.
	I0429 12:55:07.231651  876028 stop.go:75] duration metric: took 1.522359961s to stop
	I0429 12:55:07.231680  876028 stop.go:39] StopHost: ha-212075-m03
	I0429 12:55:07.232001  876028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:55:07.232058  876028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:55:07.249259  876028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0429 12:55:07.249761  876028 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:55:07.250339  876028 main.go:141] libmachine: Using API Version  1
	I0429 12:55:07.250368  876028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:55:07.250714  876028 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:55:07.253247  876028 out.go:177] * Stopping node "ha-212075-m03"  ...
	I0429 12:55:07.254589  876028 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 12:55:07.254629  876028 main.go:141] libmachine: (ha-212075-m03) Calling .DriverName
	I0429 12:55:07.255004  876028 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 12:55:07.255038  876028 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHHostname
	I0429 12:55:07.258595  876028 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:07.259090  876028 main.go:141] libmachine: (ha-212075-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:04:a1", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:49:36 +0000 UTC Type:0 Mac:52:54:00:1c:04:a1 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-212075-m03 Clientid:01:52:54:00:1c:04:a1}
	I0429 12:55:07.259149  876028 main.go:141] libmachine: (ha-212075-m03) DBG | domain ha-212075-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:1c:04:a1 in network mk-ha-212075
	I0429 12:55:07.259336  876028 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHPort
	I0429 12:55:07.259611  876028 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHKeyPath
	I0429 12:55:07.259833  876028 main.go:141] libmachine: (ha-212075-m03) Calling .GetSSHUsername
	I0429 12:55:07.259997  876028 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m03/id_rsa Username:docker}
	I0429 12:55:07.347880  876028 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 12:55:07.403577  876028 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 12:55:07.460737  876028 main.go:141] libmachine: Stopping "ha-212075-m03"...
	I0429 12:55:07.460797  876028 main.go:141] libmachine: (ha-212075-m03) Calling .GetState
	I0429 12:55:07.462514  876028 main.go:141] libmachine: (ha-212075-m03) Calling .Stop
	I0429 12:55:07.466332  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 0/120
	I0429 12:55:08.468034  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 1/120
	I0429 12:55:09.470516  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 2/120
	I0429 12:55:10.472035  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 3/120
	I0429 12:55:11.474307  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 4/120
	I0429 12:55:12.475847  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 5/120
	I0429 12:55:13.478118  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 6/120
	I0429 12:55:14.480046  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 7/120
	I0429 12:55:15.482014  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 8/120
	I0429 12:55:16.483740  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 9/120
	I0429 12:55:17.486226  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 10/120
	I0429 12:55:18.487981  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 11/120
	I0429 12:55:19.490064  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 12/120
	I0429 12:55:20.492058  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 13/120
	I0429 12:55:21.493604  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 14/120
	I0429 12:55:22.495851  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 15/120
	I0429 12:55:23.497521  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 16/120
	I0429 12:55:24.499077  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 17/120
	I0429 12:55:25.501005  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 18/120
	I0429 12:55:26.502503  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 19/120
	I0429 12:55:27.504794  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 20/120
	I0429 12:55:28.506879  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 21/120
	I0429 12:55:29.508383  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 22/120
	I0429 12:55:30.510107  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 23/120
	I0429 12:55:31.511628  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 24/120
	I0429 12:55:32.513645  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 25/120
	I0429 12:55:33.515428  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 26/120
	I0429 12:55:34.516902  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 27/120
	I0429 12:55:35.518656  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 28/120
	I0429 12:55:36.520153  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 29/120
	I0429 12:55:37.522329  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 30/120
	I0429 12:55:38.524418  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 31/120
	I0429 12:55:39.526169  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 32/120
	I0429 12:55:40.528235  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 33/120
	I0429 12:55:41.529806  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 34/120
	I0429 12:55:42.531856  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 35/120
	I0429 12:55:43.534196  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 36/120
	I0429 12:55:44.535909  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 37/120
	I0429 12:55:45.537744  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 38/120
	I0429 12:55:46.539317  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 39/120
	I0429 12:55:47.541331  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 40/120
	I0429 12:55:48.542799  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 41/120
	I0429 12:55:49.544535  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 42/120
	I0429 12:55:50.546881  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 43/120
	I0429 12:55:51.548572  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 44/120
	I0429 12:55:52.550677  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 45/120
	I0429 12:55:53.552242  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 46/120
	I0429 12:55:54.553825  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 47/120
	I0429 12:55:55.555265  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 48/120
	I0429 12:55:56.556695  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 49/120
	I0429 12:55:57.558755  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 50/120
	I0429 12:55:58.560357  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 51/120
	I0429 12:55:59.561835  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 52/120
	I0429 12:56:00.563316  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 53/120
	I0429 12:56:01.565006  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 54/120
	I0429 12:56:02.567218  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 55/120
	I0429 12:56:03.568655  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 56/120
	I0429 12:56:04.570101  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 57/120
	I0429 12:56:05.571626  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 58/120
	I0429 12:56:06.574071  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 59/120
	I0429 12:56:07.576085  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 60/120
	I0429 12:56:08.577657  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 61/120
	I0429 12:56:09.579126  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 62/120
	I0429 12:56:10.580747  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 63/120
	I0429 12:56:11.582441  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 64/120
	I0429 12:56:12.584858  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 65/120
	I0429 12:56:13.586481  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 66/120
	I0429 12:56:14.588013  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 67/120
	I0429 12:56:15.590173  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 68/120
	I0429 12:56:16.591764  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 69/120
	I0429 12:56:17.593340  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 70/120
	I0429 12:56:18.594775  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 71/120
	I0429 12:56:19.596429  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 72/120
	I0429 12:56:20.597842  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 73/120
	I0429 12:56:21.599410  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 74/120
	I0429 12:56:22.601428  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 75/120
	I0429 12:56:23.602890  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 76/120
	I0429 12:56:24.604344  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 77/120
	I0429 12:56:25.605893  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 78/120
	I0429 12:56:26.607329  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 79/120
	I0429 12:56:27.609395  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 80/120
	I0429 12:56:28.610977  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 81/120
	I0429 12:56:29.612568  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 82/120
	I0429 12:56:30.614209  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 83/120
	I0429 12:56:31.616082  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 84/120
	I0429 12:56:32.618168  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 85/120
	I0429 12:56:33.619688  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 86/120
	I0429 12:56:34.621140  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 87/120
	I0429 12:56:35.622587  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 88/120
	I0429 12:56:36.624181  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 89/120
	I0429 12:56:37.626168  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 90/120
	I0429 12:56:38.627786  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 91/120
	I0429 12:56:39.629113  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 92/120
	I0429 12:56:40.630589  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 93/120
	I0429 12:56:41.632084  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 94/120
	I0429 12:56:42.634048  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 95/120
	I0429 12:56:43.635508  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 96/120
	I0429 12:56:44.636761  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 97/120
	I0429 12:56:45.638240  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 98/120
	I0429 12:56:46.640034  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 99/120
	I0429 12:56:47.642406  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 100/120
	I0429 12:56:48.644003  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 101/120
	I0429 12:56:49.645730  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 102/120
	I0429 12:56:50.647737  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 103/120
	I0429 12:56:51.649405  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 104/120
	I0429 12:56:52.651801  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 105/120
	I0429 12:56:53.653249  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 106/120
	I0429 12:56:54.654746  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 107/120
	I0429 12:56:55.656388  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 108/120
	I0429 12:56:56.657848  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 109/120
	I0429 12:56:57.660133  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 110/120
	I0429 12:56:58.661639  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 111/120
	I0429 12:56:59.663329  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 112/120
	I0429 12:57:00.664769  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 113/120
	I0429 12:57:01.666940  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 114/120
	I0429 12:57:02.668899  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 115/120
	I0429 12:57:03.670435  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 116/120
	I0429 12:57:04.671942  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 117/120
	I0429 12:57:05.673389  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 118/120
	I0429 12:57:06.675310  876028 main.go:141] libmachine: (ha-212075-m03) Waiting for machine to stop 119/120
	I0429 12:57:07.676520  876028 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 12:57:07.676604  876028 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 12:57:07.678478  876028 out.go:177] 
	W0429 12:57:07.679739  876028 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 12:57:07.679759  876028 out.go:239] * 
	W0429 12:57:07.689310  876028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:57:07.690811  876028 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-212075 -v=7 --alsologtostderr" : exit status 82
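For context on the "Waiting for machine to stop N/120" lines captured above: they reflect a simple poll-until-stopped pattern that gives up after 120 one-second attempts and surfaces as GUEST_STOP_TIMEOUT. The following Go sketch is a hypothetical illustration of that pattern only (names such as getState and waitForStop are invented here and are not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for whatever state string the hypervisor driver reports.
type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

// getState simulates querying the driver; a real driver would ask libvirt/KVM.
func getState() vmState { return stateRunning }

// waitForStop polls the VM state once per second for up to `attempts` tries
// and returns an error if the machine never leaves the "Running" state.
func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == stateStopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 attempts mirrors the 120-iteration wait seen in the log above.
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err)
	}
}
```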
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-212075 --wait=true -v=7 --alsologtostderr
E0429 13:01:19.252968  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-212075 --wait=true -v=7 --alsologtostderr: (4m46.590344211s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-212075
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-212075 -n ha-212075
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 logs -n 25: (1.977733106s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m04 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp testdata/cp-test.txt                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m04_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03:/home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m03 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-212075 node stop m02 -v=7                                                     | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-212075 node start m02 -v=7                                                    | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-212075 -v=7                                                           | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-212075 -v=7                                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-212075 --wait=true -v=7                                                    | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 13:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-212075                                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 13:01 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:57:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:57:07.760223  876497 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:57:07.760496  876497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:57:07.760505  876497 out.go:304] Setting ErrFile to fd 2...
	I0429 12:57:07.760509  876497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:57:07.760696  876497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:57:07.761408  876497 out.go:298] Setting JSON to false
	I0429 12:57:07.762431  876497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77973,"bootTime":1714317455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:57:07.762504  876497 start.go:139] virtualization: kvm guest
	I0429 12:57:07.765968  876497 out.go:177] * [ha-212075] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:57:07.767581  876497 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:57:07.767593  876497 notify.go:220] Checking for updates...
	I0429 12:57:07.770657  876497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:57:07.772323  876497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:57:07.773571  876497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:57:07.774881  876497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:57:07.776286  876497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:57:07.778067  876497 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:57:07.778212  876497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:57:07.778669  876497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:57:07.778725  876497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:57:07.795136  876497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0429 12:57:07.795626  876497 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:57:07.796273  876497 main.go:141] libmachine: Using API Version  1
	I0429 12:57:07.796304  876497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:57:07.796683  876497 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:57:07.796933  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.837512  876497 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 12:57:07.838733  876497 start.go:297] selected driver: kvm2
	I0429 12:57:07.838753  876497 start.go:901] validating driver "kvm2" against &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:57:07.838931  876497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:57:07.839397  876497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:57:07.839509  876497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:57:07.856158  876497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:57:07.856991  876497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:57:07.857077  876497 cni.go:84] Creating CNI manager for ""
	I0429 12:57:07.857094  876497 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 12:57:07.857187  876497 start.go:340] cluster config:
	{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:57:07.857339  876497 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:57:07.860264  876497 out.go:177] * Starting "ha-212075" primary control-plane node in "ha-212075" cluster
	I0429 12:57:07.861641  876497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:57:07.861703  876497 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 12:57:07.861720  876497 cache.go:56] Caching tarball of preloaded images
	I0429 12:57:07.861834  876497 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:57:07.861849  876497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:57:07.862023  876497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:57:07.862286  876497 start.go:360] acquireMachinesLock for ha-212075: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:57:07.862362  876497 start.go:364] duration metric: took 48.339µs to acquireMachinesLock for "ha-212075"
	I0429 12:57:07.862383  876497 start.go:96] Skipping create...Using existing machine configuration
	I0429 12:57:07.862393  876497 fix.go:54] fixHost starting: 
	I0429 12:57:07.862725  876497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:57:07.862777  876497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:57:07.878948  876497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0429 12:57:07.879474  876497 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:57:07.880187  876497 main.go:141] libmachine: Using API Version  1
	I0429 12:57:07.880216  876497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:57:07.880562  876497 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:57:07.880783  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.881015  876497 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:57:07.882804  876497 fix.go:112] recreateIfNeeded on ha-212075: state=Running err=<nil>
	W0429 12:57:07.882827  876497 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 12:57:07.885265  876497 out.go:177] * Updating the running kvm2 "ha-212075" VM ...
	I0429 12:57:07.887103  876497 machine.go:94] provisionDockerMachine start ...
	I0429 12:57:07.887132  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.887479  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:07.890580  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:07.891104  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:07.891144  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:07.891318  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:07.891570  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:07.891755  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:07.891925  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:07.892090  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:07.892311  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:07.892323  876497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:57:08.013219  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:57:08.013257  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.013555  876497 buildroot.go:166] provisioning hostname "ha-212075"
	I0429 12:57:08.013586  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.013815  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.017015  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.017475  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.017527  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.017685  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.017923  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.018104  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.018293  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.018532  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.018721  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.018733  876497 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075 && echo "ha-212075" | sudo tee /etc/hostname
	I0429 12:57:08.160624  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:57:08.160661  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.164072  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.164572  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.164600  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.164930  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.165141  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.165367  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.165523  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.165697  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.165944  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.165962  876497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:57:08.284806  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:57:08.284856  876497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:57:08.284922  876497 buildroot.go:174] setting up certificates
	I0429 12:57:08.284936  876497 provision.go:84] configureAuth start
	I0429 12:57:08.284950  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.285301  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:57:08.288155  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.288573  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.288603  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.288864  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.291442  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.291890  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.291933  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.292115  876497 provision.go:143] copyHostCerts
	I0429 12:57:08.292152  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:57:08.292192  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:57:08.292202  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:57:08.292280  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:57:08.292365  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:57:08.292383  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:57:08.292390  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:57:08.292421  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:57:08.292461  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:57:08.292477  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:57:08.292483  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:57:08.292503  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:57:08.292548  876497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075 san=[127.0.0.1 192.168.39.97 ha-212075 localhost minikube]
	I0429 12:57:08.521682  876497 provision.go:177] copyRemoteCerts
	I0429 12:57:08.521785  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:57:08.521818  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.524700  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.525082  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.525111  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.525305  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.525563  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.525751  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.525885  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:57:08.614730  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:57:08.614834  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:57:08.644874  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:57:08.644966  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 12:57:08.673023  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:57:08.673123  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:57:08.703550  876497 provision.go:87] duration metric: took 418.591409ms to configureAuth
	I0429 12:57:08.703592  876497 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:57:08.703885  876497 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:57:08.703995  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.707033  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.707427  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.707453  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.707597  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.707853  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.708049  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.708191  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.708350  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.708529  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.708545  876497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:58:39.641978  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:58:39.642026  876497 machine.go:97] duration metric: took 1m31.754906281s to provisionDockerMachine
	I0429 12:58:39.642048  876497 start.go:293] postStartSetup for "ha-212075" (driver="kvm2")
	I0429 12:58:39.642066  876497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:58:39.642088  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:39.642488  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:58:39.642522  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.646396  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.646995  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.647029  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.647207  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.647471  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.647665  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.647845  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:39.741143  876497 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:58:39.746127  876497 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:58:39.746180  876497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:58:39.746258  876497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:58:39.746360  876497 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:58:39.746376  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:58:39.746469  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:58:39.757364  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:58:39.785516  876497 start.go:296] duration metric: took 143.445627ms for postStartSetup
	I0429 12:58:39.785583  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:39.785962  876497 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 12:58:39.785996  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.789244  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.789721  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.789751  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.789957  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.790227  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.790424  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.790606  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	W0429 12:58:39.892000  876497 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 12:58:39.892033  876497 fix.go:56] duration metric: took 1m32.029642416s for fixHost
	I0429 12:58:39.892059  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.895314  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.895756  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.895785  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.896022  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.896287  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.896496  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.896646  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.896896  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:58:39.897093  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:58:39.897104  876497 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:58:40.016888  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714395519.975956801
	
	I0429 12:58:40.016920  876497 fix.go:216] guest clock: 1714395519.975956801
	I0429 12:58:40.016929  876497 fix.go:229] Guest: 2024-04-29 12:58:39.975956801 +0000 UTC Remote: 2024-04-29 12:58:39.892041949 +0000 UTC m=+92.188304742 (delta=83.914852ms)
	I0429 12:58:40.016953  876497 fix.go:200] guest clock delta is within tolerance: 83.914852ms
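
A minimal editorial sketch (not minikube source) of the clock-drift check logged above: the guest timestamp read over SSH via `date +%s.%N` is compared against the local clock and accepted if the delta stays under a tolerance. The 2-second tolerance below is an assumed illustrative value, not the value minikube uses.

    // Sketch: compare a guest clock reading against the host clock.
    package main

    import (
        "fmt"
        "time"
    )

    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1714395519, 975956801) // parsed from `date +%s.%N` on the VM
        host := time.Now()
        delta, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }
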
	I0429 12:58:40.016958  876497 start.go:83] releasing machines lock for "ha-212075", held for 1m32.154586068s
	I0429 12:58:40.016979  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.017307  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:58:40.020445  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.020875  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.020898  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.021096  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.021714  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.021956  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.022074  876497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:58:40.022137  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:40.022188  876497 ssh_runner.go:195] Run: cat /version.json
	I0429 12:58:40.022217  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:40.024900  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025264  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025521  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.025555  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025725  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:40.025834  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.025868  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025914  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:40.026058  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:40.026145  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:40.026253  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:40.026352  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:40.026372  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:40.026530  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:40.109004  876497 ssh_runner.go:195] Run: systemctl --version
	I0429 12:58:40.142683  876497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:58:40.319745  876497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:58:40.326473  876497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:58:40.326595  876497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:58:40.337050  876497 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 12:58:40.337094  876497 start.go:494] detecting cgroup driver to use...
	I0429 12:58:40.337209  876497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:58:40.355818  876497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:58:40.371448  876497 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:58:40.371520  876497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:58:40.387986  876497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:58:40.404031  876497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:58:40.575327  876497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:58:40.733824  876497 docker.go:233] disabling docker service ...
	I0429 12:58:40.733921  876497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:58:40.752190  876497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:58:40.767626  876497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:58:40.922535  876497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:58:41.075177  876497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:58:41.090078  876497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:58:41.113300  876497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:58:41.113379  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.125378  876497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:58:41.125452  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.137565  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.149947  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.162213  876497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:58:41.175054  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.187255  876497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.200488  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.213075  876497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:58:41.224002  876497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:58:41.234785  876497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:58:41.382183  876497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:58:44.366488  876497 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.984255379s)
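
The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, default_sysctls) with sed and then restart CRI-O. As a rough editorial sketch (not minikube source), the same kind of whole-line substitution the `sed -i 's|^.*pause_image = .*$|...|'` step performs can be expressed in Go with the regexp package; the config content below is illustrative.

    // Sketch: replace the pause_image line in a CRI-O config fragment.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n"
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        updated := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        fmt.Print(updated)
    }
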
	I0429 12:58:44.366528  876497 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:58:44.366594  876497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:58:44.372536  876497 start.go:562] Will wait 60s for crictl version
	I0429 12:58:44.372606  876497 ssh_runner.go:195] Run: which crictl
	I0429 12:58:44.376918  876497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:58:44.417731  876497 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 12:58:44.417813  876497 ssh_runner.go:195] Run: crio --version
	I0429 12:58:44.452142  876497 ssh_runner.go:195] Run: crio --version
	I0429 12:58:44.488643  876497 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:58:44.490072  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:58:44.493014  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:44.493427  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:44.493455  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:44.493704  876497 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:58:44.499327  876497 kubeadm.go:877] updating cluster {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:58:44.499532  876497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:58:44.499597  876497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:58:44.556469  876497 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:58:44.556505  876497 crio.go:433] Images already preloaded, skipping extraction
	I0429 12:58:44.556586  876497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:58:44.596784  876497 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:58:44.596814  876497 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:58:44.596825  876497 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.30.0 crio true true} ...
	I0429 12:58:44.596945  876497 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:58:44.597018  876497 ssh_runner.go:195] Run: crio config
	I0429 12:58:44.646276  876497 cni.go:84] Creating CNI manager for ""
	I0429 12:58:44.646302  876497 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 12:58:44.646314  876497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:58:44.646348  876497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-212075 NodeName:ha-212075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:58:44.646515  876497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-212075"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:58:44.646541  876497 kube-vip.go:111] generating kube-vip config ...
	I0429 12:58:44.646600  876497 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:58:44.659263  876497 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:58:44.659421  876497 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
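
The lb_enable/lb_port settings in the manifest above follow the earlier "auto-enabling control-plane load-balancing" step, which probes whether the IPVS kernel modules can be loaded. A minimal editorial sketch (not minikube source) of that kind of decision, assuming sudo and modprobe are available on the guest:

    // Sketch: enable kube-vip load-balancing only if the IPVS modules load.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
        lbEnable := err == nil
        fmt.Printf("lb_enable=%v\n", lbEnable)
    }
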
	I0429 12:58:44.659499  876497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:58:44.670248  876497 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:58:44.670328  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:58:44.681450  876497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:58:44.700953  876497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:58:44.720535  876497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 12:58:44.740624  876497 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:58:44.761676  876497 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:58:44.766352  876497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:58:44.914881  876497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:58:44.932810  876497 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.97
	I0429 12:58:44.932837  876497 certs.go:194] generating shared ca certs ...
	I0429 12:58:44.932865  876497 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:44.933021  876497 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:58:44.933063  876497 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:58:44.933072  876497 certs.go:256] generating profile certs ...
	I0429 12:58:44.933174  876497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:58:44.933204  876497 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add
	I0429 12:58:44.933221  876497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.109 192.168.39.254]
	I0429 12:58:45.021686  876497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add ...
	I0429 12:58:45.021731  876497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add: {Name:mkda5cf7a551c10d59d01499fd8843801e13ca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:45.021929  876497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add ...
	I0429 12:58:45.021942  876497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add: {Name:mk9cfa1f4f18d14688e73084dd18bea565efeb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:45.022012  876497 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:58:45.022167  876497 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
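
The apiserver profile cert generated above carries IP SANs for the service IP, localhost, the node IPs, and the HA virtual IP 192.168.39.254. A minimal editorial sketch (not minikube source) of issuing a certificate with IP SANs using crypto/x509; it is self-signed for brevity, whereas minikube signs with its cluster CA, and the IP list is a subset of the one logged above.

    // Sketch: create a certificate whose IP SANs include the HA VIP.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.254"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
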
	I0429 12:58:45.022296  876497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:58:45.022313  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:58:45.022326  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:58:45.022339  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:58:45.022350  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:58:45.022362  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:58:45.022372  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:58:45.022381  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:58:45.022391  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:58:45.022442  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:58:45.022479  876497 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:58:45.022488  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:58:45.022508  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:58:45.022530  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:58:45.022551  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:58:45.022586  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:58:45.022611  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.022625  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.022639  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.023431  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:58:45.053176  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:58:45.113644  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:58:45.213487  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:58:45.268507  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 12:58:45.300357  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:58:45.341025  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:58:45.389050  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:58:45.439204  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:58:45.473879  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:58:45.534205  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:58:45.593487  876497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:58:45.617841  876497 ssh_runner.go:195] Run: openssl version
	I0429 12:58:45.625208  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:58:45.639145  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.645289  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.645362  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.652141  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:58:45.662642  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:58:45.675557  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.681110  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.681198  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.687881  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:58:45.698363  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:58:45.710392  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.715399  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.715470  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.722054  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:58:45.736227  876497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:58:45.743635  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 12:58:45.755675  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 12:58:45.762158  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 12:58:45.770383  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 12:58:45.777331  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 12:58:45.784427  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
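
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. A minimal editorial sketch (not minikube source) of the equivalent check in Go; the file path is illustrative.

    // Sketch: Go equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
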
	I0429 12:58:45.791401  876497 kubeadm.go:391] StartCluster: {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:58:45.791564  876497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 12:58:45.791632  876497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:58:45.842472  876497 cri.go:89] found id: "d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf"
	I0429 12:58:45.842496  876497 cri.go:89] found id: "aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3"
	I0429 12:58:45.842499  876497 cri.go:89] found id: "af3e12529064e8cb13b80b365a008f82407a7550d1a33ab25a6ced09b9cdaebd"
	I0429 12:58:45.842502  876497 cri.go:89] found id: "5a8ef3d0d8f3019d5301958b70e597d258e43311b86fd5735f6f519d7eda183e"
	I0429 12:58:45.842505  876497 cri.go:89] found id: "127376dce0d17a01837d92104efb9f706143a8043ae9f7dd72e0f9e8471f1992"
	I0429 12:58:45.842508  876497 cri.go:89] found id: "161197324cadc877ec57a18139e26d918b0f6b141d1995f3917c73b97604b834"
	I0429 12:58:45.842511  876497 cri.go:89] found id: "8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad"
	I0429 12:58:45.842513  876497 cri.go:89] found id: "a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6"
	I0429 12:58:45.842515  876497 cri.go:89] found id: "7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f"
	I0429 12:58:45.842522  876497 cri.go:89] found id: "ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d"
	I0429 12:58:45.842524  876497 cri.go:89] found id: "0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523"
	I0429 12:58:45.842527  876497 cri.go:89] found id: "220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf"
	I0429 12:58:45.842529  876497 cri.go:89] found id: "382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d"
	I0429 12:58:45.842532  876497 cri.go:89] found id: "e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16"
	I0429 12:58:45.842537  876497 cri.go:89] found id: "6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab"
	I0429 12:58:45.842542  876497 cri.go:89] found id: ""
	I0429 12:58:45.842601  876497 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.136177205Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-rcq9m,Uid:de803f70-5f57-4282-af1e-47845231d712,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395564317753510,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T12:50:34.028040167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-212075,Uid:df78e2aba17ba8d18dc89fa959ae7081,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1714395545557522577,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{kubernetes.io/config.hash: df78e2aba17ba8d18dc89fa959ae7081,kubernetes.io/config.seen: 2024-04-29T12:58:44.720331053Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-x299s,Uid:441b065a-2b42-4ac5-889e-c18200f43691,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530602261480,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-29T12:48:09.684265045Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-212075,Uid:f99ab7da4da37d23a0aa069a82f24c8c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530563754293,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.97:8443,kubernetes.io/config.hash: f99ab7da4da37d23a0aa069a82f24c8c,kubernetes.io/config.seen: 2024-04-29T12:47:53.110711957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:66e2d2b6-b
f65-4b8a-ba39-9c99a83f633e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530558630698,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\
"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T12:48:09.701413137Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&PodSandboxMetadata{Name:etcd-ha-212075,Uid:93f41646a8b279b2bde6d2412bfb785c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530544499283,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: 93f41646a8b279b2bde6d2412bfb785c,kubernetes.io/config.seen: 2024-04-29T12:47:53.110708008Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3948dd1b6cd6561d61b4
b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&PodSandboxMetadata{Name:kube-proxy-ncdsk,Uid:632757a3-fa64-4483-af75-828e292ce184,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530540169502,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T12:48:06.914589455Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-212075,Uid:83e05ac5498423338e4375f7ce45dcdf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530528795045,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 83e05ac5498423338e4375f7ce45dcdf,kubernetes.io/config.seen: 2024-04-29T12:47:53.110713356Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-212075,Uid:1ef10b09e6abe0d5e22898bbab1b91b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395530513600549,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1ef10b09e6abe0d5e22898bbab1b91b6,kubernetes.io/config.seen: 2024-04-29T12:47:53.1107143
16Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-c2t8g,Uid:343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395525139118495,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T12:48:09.695952096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&PodSandboxMetadata{Name:kindnet-vnw75,Uid:d7b71f12-5d80-4c41-ae97-a4d7e023ec98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714395525080024038,Labels:map[string]string{app: kindnet,controller-
revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T12:48:06.927821015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4b79f5b6-ffb8-4396-878e-c534666cb0c2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.136904685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a2b299a-278e-4502-9411-c51ec826750c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.136960174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a2b299a-278e-4502-9411-c51ec826750c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.137433359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:12e873d6644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort
\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a2b299a-278e-4502-9411-c51ec826750c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.150428099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4b9d3bb-0067-46b4-985b-1368fa1aa6a7 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.150500442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4b9d3bb-0067-46b4-985b-1368fa1aa6a7 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.152068609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51736376-b265-4a59-9d49-f399fe377ed2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.153161798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395715153129037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51736376-b265-4a59-9d49-f399fe377ed2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.154067784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fbe419f-3ae5-4229-a35e-9abfa5aeb296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.154147376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fbe419f-3ae5-4229-a35e-9abfa5aeb296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.154579084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fbe419f-3ae5-4229-a35e-9abfa5aeb296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.202332531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=829610cd-ee04-4188-9527-8003b42ca497 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.202416154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=829610cd-ee04-4188-9527-8003b42ca497 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.203845817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d905cc72-8e52-4a09-a075-8ef496ee213a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.204522425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395715204241716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d905cc72-8e52-4a09-a075-8ef496ee213a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.205069505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd833101-37da-4a71-a89f-34b22bb48fb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.205123072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd833101-37da-4a71-a89f-34b22bb48fb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.205557239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd833101-37da-4a71-a89f-34b22bb48fb3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.256566087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb7e4078-3439-402b-bb73-dd552b282bc9 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.257308028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb7e4078-3439-402b-bb73-dd552b282bc9 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.259830284Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b30e7b3-24ec-4f86-95df-20d271a881c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.260568310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395715260534863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b30e7b3-24ec-4f86-95df-20d271a881c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.261716716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b1f5be7-cc0e-4b9e-a433-38c75caa58fe name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.261896943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b1f5be7-cc0e-4b9e-a433-38c75caa58fe name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:01:55 ha-212075 crio[3787]: time="2024-04-29 13:01:55.262347128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b1f5be7-cc0e-4b9e-a433-38c75caa58fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	38b45c6d5e056       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   10bfd9ce78ec8       storage-provisioner
	de07ba1aa2df4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   cb8f4bd32e3df       kindnet-vnw75
	b6086e564f79a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Running             kube-controller-manager   2                   4c603040e7937       kube-controller-manager-ha-212075
	745e6582bceda       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Running             kube-apiserver            3                   24576b10ad16b       kube-apiserver-ha-212075
	d42656388820e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   10bfd9ce78ec8       storage-provisioner
	d83af75676bbe       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   a6ffea1acb446       busybox-fc5497c4f-rcq9m
	0861c5f41887f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   423790ec30aaa       kube-vip-ha-212075
	12e873d6644ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   c36be2c0ab43c       coredns-7db6d8ff4d-x299s
	38b6240c50d49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   37bb5572bfa3b       etcd-ha-212075
	68930ae1a81a7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago        Running             kube-proxy                1                   3948dd1b6cd65       kube-proxy-ncdsk
	f5a8e5fbbfe64       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago        Exited              kube-apiserver            2                   24576b10ad16b       kube-apiserver-ha-212075
	41876051108d0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago        Exited              kube-controller-manager   1                   4c603040e7937       kube-controller-manager-ha-212075
	47ff59770e077       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago        Running             kube-scheduler            1                   f4c3605d2255d       kube-scheduler-ha-212075
	d94e4c884e2e5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               2                   cb8f4bd32e3df       kindnet-vnw75
	aa2b53bbde63e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   69284408d503f       coredns-7db6d8ff4d-c2t8g
	6079fd69c4d07       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   377fa41dd93a5       busybox-fc5497c4f-rcq9m
	8923eb9969f74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   e0bec542cd689       coredns-7db6d8ff4d-c2t8g
	a7bedc2be5698       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   cb2c23b3b3b1c       coredns-7db6d8ff4d-x299s
	ae027e60b2a1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   84bca27dac841       kube-proxy-ncdsk
	220538e592762       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   258b9f1c2d733       kube-scheduler-ha-212075
	6ba91c742f08c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   814df27c007a6       etcd-ha-212075
	
	
	==> coredns [12e873d6644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39830->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39830->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35226->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1891097803]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 12:59:05.609) (total time: 10680ms):
	Trace[1891097803]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer 10680ms (12:59:16.289)
	Trace[1891097803]: [10.680502529s] [10.680502529s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad] <==
	[INFO] 10.244.0.4:39294 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002366491s
	[INFO] 10.244.0.4:47691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095562s
	[INFO] 10.244.0.4:49991 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146136s
	[INFO] 10.244.0.4:45880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133788s
	[INFO] 10.244.2.2:40297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017628s
	[INFO] 10.244.2.2:44282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974026s
	[INFO] 10.244.2.2:48058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199321s
	[INFO] 10.244.2.2:50097 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220995s
	[INFO] 10.244.2.2:60877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132114s
	[INFO] 10.244.2.2:38824 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114121s
	[INFO] 10.244.1.2:60691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192262s
	[INFO] 10.244.1.2:51664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123427s
	[INFO] 10.244.1.2:57326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156295s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105493s
	[INFO] 10.244.0.4:39454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248959s
	[INFO] 10.244.2.2:56559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010789s
	[INFO] 10.244.1.2:57860 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144445s
	[INFO] 10.244.1.2:40470 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145332s
	[INFO] 10.244.0.4:35067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124783s
	[INFO] 10.244.2.2:47889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150138s
	[INFO] 10.244.2.2:60310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091159s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1858&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6] <==
	[INFO] 10.244.1.2:43604 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158954s
	[INFO] 10.244.0.4:58453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127934s
	[INFO] 10.244.0.4:52484 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001277738s
	[INFO] 10.244.0.4:47770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128102s
	[INFO] 10.244.0.4:53060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103039s
	[INFO] 10.244.2.2:55991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854135s
	[INFO] 10.244.2.2:33533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090157s
	[INFO] 10.244.1.2:52893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090867s
	[INFO] 10.244.0.4:54479 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102901s
	[INFO] 10.244.0.4:53525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000359828s
	[INFO] 10.244.2.2:57755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000264423s
	[INFO] 10.244.2.2:47852 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118616s
	[INFO] 10.244.2.2:38289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112347s
	[INFO] 10.244.1.2:55092 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184788s
	[INFO] 10.244.1.2:52235 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146353s
	[INFO] 10.244.0.4:55598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137209s
	[INFO] 10.244.0.4:54649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121493s
	[INFO] 10.244.0.4:50694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136791s
	[INFO] 10.244.2.2:49177 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104896s
	[INFO] 10.244.2.2:41037 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088839s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[5625112]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 12:58:57.212) (total time: 10001ms):
	Trace[5625112]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:59:07.214)
	Trace[5625112]: [10.001578514s] [10.001578514s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35316->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35316->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-212075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:48:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-212075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eefe9cc034f74464a919edd5f6b61c2b
	  System UUID:                eefe9cc0-34f7-4464-a919-edd5f6b61c2b
	  Boot ID:                    20b6e47d-4696-4b2a-ba7c-62e73184f5c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rcq9m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-c2t8g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-x299s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-212075                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-vnw75                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-212075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-212075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-ncdsk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-212075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-212075                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m22s              kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-212075 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Warning  ContainerGCFailed        4m2s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           26s                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	
	
	Name:               ha-212075-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:48:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:00:17 +0000   Mon, 29 Apr 2024 12:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:00:17 +0000   Mon, 29 Apr 2024 12:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:00:17 +0000   Mon, 29 Apr 2024 12:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:00:17 +0000   Mon, 29 Apr 2024 12:59:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-212075-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c5f79339047d6aaf2c88397c97942
	  System UUID:                088c5f79-3390-47d6-aaf2-c88397c97942
	  Boot ID:                    22594bb6-fb8b-4284-9426-944febb4fe41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q8rf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-212075-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-sx2zd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-212075-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-212075-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sfmhh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-212075-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-212075-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m18s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  NodeNotReady             9m33s                  node-controller  Node ha-212075-m02 status is now: NodeNotReady
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x7 over 2m47s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           26s                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	
	
	Name:               ha-212075-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_50_12_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:50:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:01:26 +0000   Mon, 29 Apr 2024 13:00:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:01:26 +0000   Mon, 29 Apr 2024 13:00:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:01:26 +0000   Mon, 29 Apr 2024 13:00:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:01:26 +0000   Mon, 29 Apr 2024 13:00:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-212075-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 535ef7f0e3c949a7801d0ab8f3e70b91
	  System UUID:                535ef7f0-e3c9-49a7-801d-0ab8f3e70b91
	  Boot ID:                    66958c39-05df-47d0-a7b5-4ba23c767bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xw452                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-212075-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-2d8zp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-212075-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-212075-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c27wn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-212075-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-212075-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-212075-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	  Normal   NodeNotReady             94s                node-controller  Node ha-212075-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 60s (x2 over 60s)  kubelet          Node ha-212075-m03 has been rebooted, boot id: 66958c39-05df-47d0-a7b5-4ba23c767bd7
	  Normal   NodeHasSufficientMemory  60s (x3 over 60s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x3 over 60s)  kubelet          Node ha-212075-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x3 over 60s)  kubelet          Node ha-212075-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             60s                kubelet          Node ha-212075-m03 status is now: NodeNotReady
	  Normal   NodeReady                60s                kubelet          Node ha-212075-m03 status is now: NodeReady
	  Normal   RegisteredNode           26s                node-controller  Node ha-212075-m03 event: Registered Node ha-212075-m03 in Controller
	
	
	Name:               ha-212075-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_51_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:01:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:01:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:01:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:01:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-212075-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee58aa83584b463285f294fa28d19e05
	  System UUID:                ee58aa83-584b-4632-85f2-94fa28d19e05
	  Boot ID:                    9a189d51-10ab-493c-a851-4fd7b23ecd1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tbw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-bnbr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-212075-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   NodeNotReady             94s                node-controller  Node ha-212075-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           26s                node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-212075-m04 has been rebooted, boot id: 9a189d51-10ab-493c-a851-4fd7b23ecd1e
	  Normal   NodeReady                9s                 kubelet          Node ha-212075-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.473754] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.066120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063959] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171013] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136787] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.290881] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.567542] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.833264] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.214053] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.340857] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.082766] kauditd_printk_skb: 40 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.082460] kauditd_printk_skb: 72 callbacks suppressed
	[Apr29 12:58] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.174026] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.186334] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.148076] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.314140] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +3.531271] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.908485] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.004807] kauditd_printk_skb: 2 callbacks suppressed
	[Apr29 12:59] kauditd_printk_skb: 58 callbacks suppressed
	[  +9.062141] kauditd_printk_skb: 1 callbacks suppressed
	[ +28.166402] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb] <==
	{"level":"warn","ts":"2024-04-29T13:00:51.776973Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.109:2380/version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:51.77705Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:51.945009Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:51.945094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:55.778931Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.109:2380/version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:55.779009Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:56.946234Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:56.946363Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:59.780771Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.109:2380/version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:00:59.780895Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:01.946492Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:01.946513Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:03.783734Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.109:2380/version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:03.78386Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:06.947064Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:06.947245Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e28237f435b7165","rtt":"0s","error":"dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:07.786407Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.109:2380/version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T13:01:07.786573Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e28237f435b7165","error":"Get \"https://192.168.39.109:2380/version\": dial tcp 192.168.39.109:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-29T13:01:10.587618Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.587791Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.602242Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.636476Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"e28237f435b7165","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T13:01:10.636536Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.645078Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"e28237f435b7165","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T13:01:10.645371Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	
	
	==> etcd [6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab] <==
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.884393Z","caller":"traceutil/trace.go:171","msg":"trace[2076736015] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"8.455297133s","start":"2024-04-29T12:57:00.429093Z","end":"2024-04-29T12:57:08.88439Z","steps":["trace[2076736015] 'agreement among raft nodes before linearized reading'  (duration: 8.455290936s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.884425Z","caller":"traceutil/trace.go:171","msg":"trace[494929290] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; }","duration":"7.397452906s","start":"2024-04-29T12:57:01.486965Z","end":"2024-04-29T12:57:08.884418Z","steps":["trace[494929290] 'agreement among raft nodes before linearized reading'  (duration: 7.397441637s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.88321Z","caller":"traceutil/trace.go:171","msg":"trace[1933475723] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"7.146336445s","start":"2024-04-29T12:57:01.736871Z","end":"2024-04-29T12:57:08.883208Z","steps":["trace[1933475723] 'agreement among raft nodes before linearized reading'  (duration: 7.146332305s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.912543Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f61fae125a956d36","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T12:57:08.91285Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.912903Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.912964Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913027Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.9131Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913199Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913236Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.91331Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.91337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913445Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913513Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913621Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913842Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.917921Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-29T12:57:08.918145Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-29T12:57:08.918193Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-212075","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 13:01:56 up 14 min,  0 users,  load average: 0.28, 0.28, 0.20
	Linux ha-212075 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf] <==
	I0429 12:58:45.888374       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 12:58:45.888435       1 main.go:107] hostIP = 192.168.39.97
	podIP = 192.168.39.97
	I0429 12:58:45.889142       1 main.go:116] setting mtu 1500 for CNI 
	I0429 12:58:45.889170       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 12:58:45.889194       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 12:58:46.280219       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 12:58:48.641119       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 12:58:51.713633       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 12:59:03.722194       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 12:59:13.248979       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.239:58660->10.96.0.1:443: read: connection reset by peer
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.239:58660->10.96.0.1:443: read: connection reset by peer
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a] <==
	I0429 13:01:21.103529       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:01:31.124025       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:01:31.124213       1 main.go:227] handling current node
	I0429 13:01:31.124260       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:01:31.124288       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:01:31.124493       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 13:01:31.124549       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 13:01:31.124797       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:01:31.124838       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:01:41.145138       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:01:41.145192       1 main.go:227] handling current node
	I0429 13:01:41.145211       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:01:41.145217       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:01:41.145330       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 13:01:41.145336       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 13:01:41.145466       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:01:41.145493       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:01:51.162132       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:01:51.162188       1 main.go:227] handling current node
	I0429 13:01:51.162209       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:01:51.162218       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:01:51.162403       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0429 13:01:51.162439       1 main.go:250] Node ha-212075-m03 has CIDR [10.244.2.0/24] 
	I0429 13:01:51.162515       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:01:51.162525       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7] <==
	I0429 12:59:36.085392       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 12:59:36.085441       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 12:59:36.163871       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 12:59:36.168608       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 12:59:36.173233       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 12:59:36.173267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 12:59:36.175737       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 12:59:36.175821       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 12:59:36.175827       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 12:59:36.175925       1 aggregator.go:165] initial CRD sync complete...
	I0429 12:59:36.175945       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 12:59:36.175950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 12:59:36.175955       1 cache.go:39] Caches are synced for autoregister controller
	I0429 12:59:36.192539       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 12:59:36.198637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 12:59:36.202433       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 12:59:36.202502       1 policy_source.go:224] refreshing policies
	W0429 12:59:36.226973       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.36]
	I0429 12:59:36.228475       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 12:59:36.242387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 12:59:36.251731       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 12:59:36.292013       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 12:59:37.070264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 12:59:37.584379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.36 192.168.39.97]
	W0429 12:59:47.581966       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36 192.168.39.97]
	
	
	==> kube-apiserver [f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7] <==
	I0429 12:58:51.593850       1 options.go:221] external host was not specified, using 192.168.39.97
	I0429 12:58:51.595825       1 server.go:148] Version: v1.30.0
	I0429 12:58:51.595921       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:58:52.219138       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 12:58:52.233868       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 12:58:52.239867       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 12:58:52.239934       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 12:58:52.240152       1 instance.go:299] Using reconciler: lease
	W0429 12:59:12.211981       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 12:59:12.212041       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0429 12:59:12.241559       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361] <==
	I0429 12:58:51.950198       1 serving.go:380] Generated self-signed cert in-memory
	I0429 12:58:52.604116       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 12:58:52.604159       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:58:52.605975       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 12:58:52.606092       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 12:58:52.606188       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 12:58:52.606370       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0429 12:59:13.249485       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.97:8443/healthz\": dial tcp 192.168.39.97:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be] <==
	I0429 12:59:48.722847       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 12:59:48.725253       1 shared_informer.go:320] Caches are synced for GC
	I0429 12:59:48.733441       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 12:59:48.734361       1 shared_informer.go:320] Caches are synced for PV protection
	I0429 12:59:48.753755       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 12:59:48.792401       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 12:59:48.891693       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 12:59:48.927601       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 12:59:49.301101       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 12:59:49.301146       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 12:59:49.355302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 12:59:50.058067       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gzzk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gzzk5\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 12:59:50.058352       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"92582cfd-0c67-4b63-a898-feef05c3240f", APIVersion:"v1", ResourceVersion:"285", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gzzk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gzzk5": the object has been modified; please apply your changes to the latest version and try again
	I0429 12:59:50.084124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.000098ms"
	I0429 12:59:50.084256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.39µs"
	I0429 13:00:20.049473       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gzzk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gzzk5\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 13:00:20.049982       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"92582cfd-0c67-4b63-a898-feef05c3240f", APIVersion:"v1", ResourceVersion:"285", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gzzk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gzzk5": the object has been modified; please apply your changes to the latest version and try again
	I0429 13:00:20.110414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.640528ms"
	I0429 13:00:20.110717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="187.408µs"
	I0429 13:00:21.114722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.213082ms"
	I0429 13:00:21.114831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.866µs"
	I0429 13:00:56.668314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.177µs"
	I0429 13:01:13.791393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.568512ms"
	I0429 13:01:13.791764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.215µs"
	I0429 13:01:46.883587       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	
	
	==> kube-proxy [68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a] <==
	I0429 12:58:52.696573       1 server_linux.go:69] "Using iptables proxy"
	E0429 12:58:53.249319       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:58:56.322127       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:58:59.393763       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:59:05.537128       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:59:14.753595       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 12:59:32.973597       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0429 12:59:33.057634       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:59:33.057741       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:59:33.057760       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:59:33.065789       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:59:33.066577       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:59:33.066612       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:59:33.069132       1 config.go:192] "Starting service config controller"
	I0429 12:59:33.069175       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:59:33.069417       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:59:33.069424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:59:33.070929       1 config.go:319] "Starting node config controller"
	I0429 12:59:33.070959       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:59:33.169434       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:59:33.169765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:59:33.171482       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d] <==
	E0429 12:55:58.148221       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:01.217346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:01.217549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:01.217735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:01.217795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:04.289341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:04.290001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:07.362339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:07.362643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:07.362557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:07.362737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:10.433935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:10.434131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:16.579827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:16.580061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:19.651953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:19.652269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:22.722770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:22.722999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:38.083091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:38.083202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:38.083436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:38.083560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:41.154641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:41.155293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf] <==
	W0429 12:57:05.016269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.016415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:05.065153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.065253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:05.116701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:57:05.116749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:57:05.141571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:57:05.141717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:57:05.217620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:57:05.217746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:57:05.228465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:57:05.228537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:57:05.257487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:57:05.257636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:57:05.577452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:57:05.577557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 12:57:05.676718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:57:05.676811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:57:05.716424       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.716581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:06.031254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:57:06.031372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:57:08.388157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:57:08.388256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:57:08.829244       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe] <==
	W0429 12:59:29.509515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:29.509591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:30.414633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:30.414807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:31.421593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:31.421644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.224096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.97:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.224163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.97:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.375354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.375473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.460486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.97:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.460626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.97:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.950832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.950967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:33.073910       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:33.073971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:33.198249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:33.198429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:36.169385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:59:36.169419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:59:36.169476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 12:59:36.172621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 12:59:36.177235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:59:36.177443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0429 12:59:50.456414       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 12:59:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:59:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:59:54 ha-212075 kubelet[1362]: I0429 12:59:54.521027    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-rcq9m" podStartSLOduration=560.813734291 podStartE2EDuration="9m21.520989038s" podCreationTimestamp="2024-04-29 12:50:33 +0000 UTC" firstStartedPulling="2024-04-29 12:50:34.587205626 +0000 UTC m=+161.574699882" lastFinishedPulling="2024-04-29 12:50:35.29446037 +0000 UTC m=+162.281954629" observedRunningTime="2024-04-29 12:50:35.987998802 +0000 UTC m=+162.975493078" watchObservedRunningTime="2024-04-29 12:59:54.520989038 +0000 UTC m=+721.508483314"
	Apr 29 12:59:57 ha-212075 kubelet[1362]: I0429 12:59:57.134963    1362 scope.go:117] "RemoveContainer" containerID="d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b"
	Apr 29 12:59:57 ha-212075 kubelet[1362]: E0429 12:59:57.135249    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(66e2d2b6-bf65-4b8a-ba39-9c99a83f633e)\"" pod="kube-system/storage-provisioner" podUID="66e2d2b6-bf65-4b8a-ba39-9c99a83f633e"
	Apr 29 12:59:57 ha-212075 kubelet[1362]: I0429 12:59:57.137098    1362 scope.go:117] "RemoveContainer" containerID="d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf"
	Apr 29 12:59:57 ha-212075 kubelet[1362]: E0429 12:59:57.137352    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vnw75_kube-system(d7b71f12-5d80-4c41-ae97-a4d7e023ec98)\"" pod="kube-system/kindnet-vnw75" podUID="d7b71f12-5d80-4c41-ae97-a4d7e023ec98"
	Apr 29 13:00:10 ha-212075 kubelet[1362]: I0429 13:00:10.134905    1362 scope.go:117] "RemoveContainer" containerID="d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf"
	Apr 29 13:00:11 ha-212075 kubelet[1362]: I0429 13:00:11.134975    1362 scope.go:117] "RemoveContainer" containerID="d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b"
	Apr 29 13:00:11 ha-212075 kubelet[1362]: E0429 13:00:11.135223    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(66e2d2b6-bf65-4b8a-ba39-9c99a83f633e)\"" pod="kube-system/storage-provisioner" podUID="66e2d2b6-bf65-4b8a-ba39-9c99a83f633e"
	Apr 29 13:00:25 ha-212075 kubelet[1362]: I0429 13:00:25.134463    1362 scope.go:117] "RemoveContainer" containerID="d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.134583    1362 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-212075" podUID="44e6d402-7c09-4c33-9905-15f9d4a29381"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.156963    1362 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-212075"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.506593    1362 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-212075" podUID="44e6d402-7c09-4c33-9905-15f9d4a29381"
	Apr 29 13:00:43 ha-212075 kubelet[1362]: I0429 13:00:43.159099    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-212075" podStartSLOduration=7.159070499 podStartE2EDuration="7.159070499s" podCreationTimestamp="2024-04-29 13:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:00:43.158903899 +0000 UTC m=+770.146398177" watchObservedRunningTime="2024-04-29 13:00:43.159070499 +0000 UTC m=+770.146564776"
	Apr 29 13:00:53 ha-212075 kubelet[1362]: E0429 13:00:53.152946    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:00:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:01:53 ha-212075 kubelet[1362]: E0429 13:01:53.154265    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:01:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:01:54.759494  877965 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18773-847310/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-212075 -n ha-212075
helpers_test.go:261: (dbg) Run:  kubectl --context ha-212075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 stop -v=7 --alsologtostderr: exit status 82 (2m0.528112162s)

                                                
                                                
-- stdout --
	* Stopping node "ha-212075-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:02:15.152517  878372 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:02:15.152690  878372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:02:15.152700  878372 out.go:304] Setting ErrFile to fd 2...
	I0429 13:02:15.152704  878372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:02:15.152909  878372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:02:15.153180  878372 out.go:298] Setting JSON to false
	I0429 13:02:15.153263  878372 mustload.go:65] Loading cluster: ha-212075
	I0429 13:02:15.153669  878372 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:02:15.153763  878372 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 13:02:15.153949  878372 mustload.go:65] Loading cluster: ha-212075
	I0429 13:02:15.154095  878372 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:02:15.154123  878372 stop.go:39] StopHost: ha-212075-m04
	I0429 13:02:15.154476  878372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:02:15.154519  878372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:02:15.171770  878372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I0429 13:02:15.172363  878372 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:02:15.173045  878372 main.go:141] libmachine: Using API Version  1
	I0429 13:02:15.173077  878372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:02:15.173466  878372 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:02:15.176322  878372 out.go:177] * Stopping node "ha-212075-m04"  ...
	I0429 13:02:15.177685  878372 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 13:02:15.177743  878372 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 13:02:15.178137  878372 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 13:02:15.178172  878372 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 13:02:15.181321  878372 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:02:15.181801  878372 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 14:01:41 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 13:02:15.181838  878372 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:02:15.181980  878372 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 13:02:15.182216  878372 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 13:02:15.182439  878372 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 13:02:15.182619  878372 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	I0429 13:02:15.271220  878372 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 13:02:15.326238  878372 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 13:02:15.382801  878372 main.go:141] libmachine: Stopping "ha-212075-m04"...
	I0429 13:02:15.382837  878372 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 13:02:15.384461  878372 main.go:141] libmachine: (ha-212075-m04) Calling .Stop
	I0429 13:02:15.387924  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 0/120
	I0429 13:02:16.390013  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 1/120
	I0429 13:02:17.391547  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 2/120
	I0429 13:02:18.393152  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 3/120
	I0429 13:02:19.394887  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 4/120
	I0429 13:02:20.396976  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 5/120
	I0429 13:02:21.398386  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 6/120
	I0429 13:02:22.399871  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 7/120
	I0429 13:02:23.402046  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 8/120
	I0429 13:02:24.404183  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 9/120
	I0429 13:02:25.406640  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 10/120
	I0429 13:02:26.408183  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 11/120
	I0429 13:02:27.409950  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 12/120
	I0429 13:02:28.411847  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 13/120
	I0429 13:02:29.414445  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 14/120
	I0429 13:02:30.416297  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 15/120
	I0429 13:02:31.417965  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 16/120
	I0429 13:02:32.419336  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 17/120
	I0429 13:02:33.420898  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 18/120
	I0429 13:02:34.422528  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 19/120
	I0429 13:02:35.424804  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 20/120
	I0429 13:02:36.427251  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 21/120
	I0429 13:02:37.428572  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 22/120
	I0429 13:02:38.430051  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 23/120
	I0429 13:02:39.431464  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 24/120
	I0429 13:02:40.433155  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 25/120
	I0429 13:02:41.435080  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 26/120
	I0429 13:02:42.436599  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 27/120
	I0429 13:02:43.438735  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 28/120
	I0429 13:02:44.440183  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 29/120
	I0429 13:02:45.442199  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 30/120
	I0429 13:02:46.443725  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 31/120
	I0429 13:02:47.445384  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 32/120
	I0429 13:02:48.446889  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 33/120
	I0429 13:02:49.448442  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 34/120
	I0429 13:02:50.450537  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 35/120
	I0429 13:02:51.451993  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 36/120
	I0429 13:02:52.454216  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 37/120
	I0429 13:02:53.455716  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 38/120
	I0429 13:02:54.458035  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 39/120
	I0429 13:02:55.460374  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 40/120
	I0429 13:02:56.462066  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 41/120
	I0429 13:02:57.463338  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 42/120
	I0429 13:02:58.465408  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 43/120
	I0429 13:02:59.466627  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 44/120
	I0429 13:03:00.468900  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 45/120
	I0429 13:03:01.470179  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 46/120
	I0429 13:03:02.471484  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 47/120
	I0429 13:03:03.472818  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 48/120
	I0429 13:03:04.473972  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 49/120
	I0429 13:03:05.476094  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 50/120
	I0429 13:03:06.477785  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 51/120
	I0429 13:03:07.479292  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 52/120
	I0429 13:03:08.480765  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 53/120
	I0429 13:03:09.482028  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 54/120
	I0429 13:03:10.484239  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 55/120
	I0429 13:03:11.485865  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 56/120
	I0429 13:03:12.487135  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 57/120
	I0429 13:03:13.489001  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 58/120
	I0429 13:03:14.490423  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 59/120
	I0429 13:03:15.492696  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 60/120
	I0429 13:03:16.494290  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 61/120
	I0429 13:03:17.495762  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 62/120
	I0429 13:03:18.497434  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 63/120
	I0429 13:03:19.499550  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 64/120
	I0429 13:03:20.501295  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 65/120
	I0429 13:03:21.503472  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 66/120
	I0429 13:03:22.505034  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 67/120
	I0429 13:03:23.506687  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 68/120
	I0429 13:03:24.508288  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 69/120
	I0429 13:03:25.510732  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 70/120
	I0429 13:03:26.512345  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 71/120
	I0429 13:03:27.514213  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 72/120
	I0429 13:03:28.515757  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 73/120
	I0429 13:03:29.517283  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 74/120
	I0429 13:03:30.519479  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 75/120
	I0429 13:03:31.521561  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 76/120
	I0429 13:03:32.523287  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 77/120
	I0429 13:03:33.524772  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 78/120
	I0429 13:03:34.526377  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 79/120
	I0429 13:03:35.528598  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 80/120
	I0429 13:03:36.529974  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 81/120
	I0429 13:03:37.531300  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 82/120
	I0429 13:03:38.532692  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 83/120
	I0429 13:03:39.534902  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 84/120
	I0429 13:03:40.536463  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 85/120
	I0429 13:03:41.537927  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 86/120
	I0429 13:03:42.539682  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 87/120
	I0429 13:03:43.541837  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 88/120
	I0429 13:03:44.543978  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 89/120
	I0429 13:03:45.546459  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 90/120
	I0429 13:03:46.547977  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 91/120
	I0429 13:03:47.550071  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 92/120
	I0429 13:03:48.551677  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 93/120
	I0429 13:03:49.553034  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 94/120
	I0429 13:03:50.555081  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 95/120
	I0429 13:03:51.556543  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 96/120
	I0429 13:03:52.558043  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 97/120
	I0429 13:03:53.559442  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 98/120
	I0429 13:03:54.561534  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 99/120
	I0429 13:03:55.562974  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 100/120
	I0429 13:03:56.564518  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 101/120
	I0429 13:03:57.566037  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 102/120
	I0429 13:03:58.568430  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 103/120
	I0429 13:03:59.569741  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 104/120
	I0429 13:04:00.571713  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 105/120
	I0429 13:04:01.573063  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 106/120
	I0429 13:04:02.574801  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 107/120
	I0429 13:04:03.576724  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 108/120
	I0429 13:04:04.578235  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 109/120
	I0429 13:04:05.580321  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 110/120
	I0429 13:04:06.582022  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 111/120
	I0429 13:04:07.583711  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 112/120
	I0429 13:04:08.585932  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 113/120
	I0429 13:04:09.588031  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 114/120
	I0429 13:04:10.590054  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 115/120
	I0429 13:04:11.591475  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 116/120
	I0429 13:04:12.592918  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 117/120
	I0429 13:04:13.594502  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 118/120
	I0429 13:04:14.596679  878372 main.go:141] libmachine: (ha-212075-m04) Waiting for machine to stop 119/120
	I0429 13:04:15.597812  878372 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 13:04:15.597874  878372 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 13:04:15.600474  878372 out.go:177] 
	W0429 13:04:15.602130  878372 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 13:04:15.602151  878372 out.go:239] * 
	* 
	W0429 13:04:15.611600  878372 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 13:04:15.613602  878372 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-212075 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: exit status 3 (19.087464103s)

                                                
                                                
-- stdout --
	ha-212075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-212075-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:04:15.682817  878838 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:04:15.682963  878838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:15.682974  878838 out.go:304] Setting ErrFile to fd 2...
	I0429 13:04:15.682978  878838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:15.683181  878838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:04:15.683430  878838 out.go:298] Setting JSON to false
	I0429 13:04:15.683464  878838 mustload.go:65] Loading cluster: ha-212075
	I0429 13:04:15.683606  878838 notify.go:220] Checking for updates...
	I0429 13:04:15.683928  878838 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:04:15.683956  878838 status.go:255] checking status of ha-212075 ...
	I0429 13:04:15.684446  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.684505  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.705596  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0429 13:04:15.706145  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.706849  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.706883  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.707245  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.707512  878838 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 13:04:15.709145  878838 status.go:330] ha-212075 host status = "Running" (err=<nil>)
	I0429 13:04:15.709164  878838 host.go:66] Checking if "ha-212075" exists ...
	I0429 13:04:15.709461  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.709526  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.725615  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I0429 13:04:15.726164  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.726670  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.726698  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.727043  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.727259  878838 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 13:04:15.729975  878838 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 13:04:15.730408  878838 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 13:04:15.730431  878838 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 13:04:15.730590  878838 host.go:66] Checking if "ha-212075" exists ...
	I0429 13:04:15.730913  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.730962  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.747583  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I0429 13:04:15.748089  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.748698  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.748730  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.749069  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.749253  878838 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 13:04:15.749461  878838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:15.749502  878838 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 13:04:15.752493  878838 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 13:04:15.752999  878838 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 13:04:15.753037  878838 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 13:04:15.753204  878838 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 13:04:15.753412  878838 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 13:04:15.753576  878838 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 13:04:15.753787  878838 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 13:04:15.845317  878838 ssh_runner.go:195] Run: systemctl --version
	I0429 13:04:15.853646  878838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:15.872026  878838 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 13:04:15.872061  878838 api_server.go:166] Checking apiserver status ...
	I0429 13:04:15.872097  878838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:04:15.889344  878838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5095/cgroup
	W0429 13:04:15.901095  878838 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5095/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:04:15.901178  878838 ssh_runner.go:195] Run: ls
	I0429 13:04:15.907280  878838 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 13:04:15.911843  878838 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 13:04:15.911880  878838 status.go:422] ha-212075 apiserver status = Running (err=<nil>)
	I0429 13:04:15.911901  878838 status.go:257] ha-212075 status: &{Name:ha-212075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:04:15.911929  878838 status.go:255] checking status of ha-212075-m02 ...
	I0429 13:04:15.912274  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.912322  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.928302  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0429 13:04:15.928809  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.929330  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.929352  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.929689  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.929903  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetState
	I0429 13:04:15.931491  878838 status.go:330] ha-212075-m02 host status = "Running" (err=<nil>)
	I0429 13:04:15.931513  878838 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 13:04:15.931823  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.931862  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.950392  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0429 13:04:15.950951  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.951544  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.951580  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.951952  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.952127  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetIP
	I0429 13:04:15.955331  878838 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 13:04:15.955767  878838 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:58:57 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 13:04:15.955797  878838 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 13:04:15.956004  878838 host.go:66] Checking if "ha-212075-m02" exists ...
	I0429 13:04:15.956393  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:15.956447  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:15.972159  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0429 13:04:15.972591  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:15.973106  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:15.973132  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:15.973521  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:15.973739  878838 main.go:141] libmachine: (ha-212075-m02) Calling .DriverName
	I0429 13:04:15.973945  878838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:15.973988  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHHostname
	I0429 13:04:15.976815  878838 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 13:04:15.977240  878838 main.go:141] libmachine: (ha-212075-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:f4:9a", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:58:57 +0000 UTC Type:0 Mac:52:54:00:46:f4:9a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-212075-m02 Clientid:01:52:54:00:46:f4:9a}
	I0429 13:04:15.977295  878838 main.go:141] libmachine: (ha-212075-m02) DBG | domain ha-212075-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:46:f4:9a in network mk-ha-212075
	I0429 13:04:15.977382  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHPort
	I0429 13:04:15.977560  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHKeyPath
	I0429 13:04:15.977719  878838 main.go:141] libmachine: (ha-212075-m02) Calling .GetSSHUsername
	I0429 13:04:15.977850  878838 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m02/id_rsa Username:docker}
	I0429 13:04:16.061830  878838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:16.082469  878838 kubeconfig.go:125] found "ha-212075" server: "https://192.168.39.254:8443"
	I0429 13:04:16.082512  878838 api_server.go:166] Checking apiserver status ...
	I0429 13:04:16.082557  878838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:04:16.101954  878838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W0429 13:04:16.113866  878838 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:04:16.113950  878838 ssh_runner.go:195] Run: ls
	I0429 13:04:16.118889  878838 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 13:04:16.123340  878838 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 13:04:16.123382  878838 status.go:422] ha-212075-m02 apiserver status = Running (err=<nil>)
	I0429 13:04:16.123395  878838 status.go:257] ha-212075-m02 status: &{Name:ha-212075-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:04:16.123413  878838 status.go:255] checking status of ha-212075-m04 ...
	I0429 13:04:16.123829  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:16.123878  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:16.140226  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0429 13:04:16.140741  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:16.141300  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:16.141322  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:16.141657  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:16.141825  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetState
	I0429 13:04:16.143506  878838 status.go:330] ha-212075-m04 host status = "Running" (err=<nil>)
	I0429 13:04:16.143530  878838 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 13:04:16.143861  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:16.143902  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:16.159851  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0429 13:04:16.160530  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:16.161164  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:16.161197  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:16.161565  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:16.161772  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetIP
	I0429 13:04:16.164680  878838 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:04:16.165081  878838 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 14:01:41 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 13:04:16.165098  878838 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:04:16.165382  878838 host.go:66] Checking if "ha-212075-m04" exists ...
	I0429 13:04:16.165818  878838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:04:16.165876  878838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:04:16.181531  878838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0429 13:04:16.182075  878838 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:04:16.182593  878838 main.go:141] libmachine: Using API Version  1
	I0429 13:04:16.182619  878838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:04:16.183072  878838 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:04:16.183494  878838 main.go:141] libmachine: (ha-212075-m04) Calling .DriverName
	I0429 13:04:16.183749  878838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:16.183775  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHHostname
	I0429 13:04:16.186655  878838 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:04:16.187083  878838 main.go:141] libmachine: (ha-212075-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:05:31", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 14:01:41 +0000 UTC Type:0 Mac:52:54:00:83:05:31 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-212075-m04 Clientid:01:52:54:00:83:05:31}
	I0429 13:04:16.187113  878838 main.go:141] libmachine: (ha-212075-m04) DBG | domain ha-212075-m04 has defined IP address 192.168.39.139 and MAC address 52:54:00:83:05:31 in network mk-ha-212075
	I0429 13:04:16.187269  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHPort
	I0429 13:04:16.187483  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHKeyPath
	I0429 13:04:16.187669  878838 main.go:141] libmachine: (ha-212075-m04) Calling .GetSSHUsername
	I0429 13:04:16.187820  878838 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075-m04/id_rsa Username:docker}
	W0429 13:04:34.703615  878838 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0429 13:04:34.703739  878838 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0429 13:04:34.703757  878838 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0429 13:04:34.703779  878838 status.go:257] ha-212075-m04 status: &{Name:ha-212075-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0429 13:04:34.703802  878838 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host

                                                
                                                
** /stderr **
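The failure above reduces to an SSH reachability problem: after `ha-212075 stop`, node m04 (192.168.39.139) no longer answers on port 22, so the status probe's `df -h /var` session cannot be opened and the node is reported as `Host:Error Kubelet:Nonexistent`. The following is a minimal illustrative sketch (not minikube's own code) of probing that SSH port the way the check effectively does; the address is taken from the log and would need to be adjusted for any other network.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.139:22 is the m04 SSH endpoint from the log above.
	// A stopped/unreachable VM typically fails here with
	// "connect: no route to host", matching the status error.
	conn, err := net.DialTimeout("tcp", "192.168.39.139:22", 5*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}
```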
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-212075 -n ha-212075
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 logs -n 25: (1.832189596s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m04 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp testdata/cp-test.txt                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075:/home/docker/cp-test_ha-212075-m04_ha-212075.txt                       |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075 sudo cat                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075.txt                                 |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m02:/home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m02 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m03:/home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n                                                                 | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | ha-212075-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-212075 ssh -n ha-212075-m03 sudo cat                                          | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC | 29 Apr 24 12:51 UTC |
	|         | /home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-212075 node stop m02 -v=7                                                     | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-212075 node start m02 -v=7                                                    | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-212075 -v=7                                                           | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-212075 -v=7                                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-212075 --wait=true -v=7                                                    | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 13:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-212075                                                                | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 13:01 UTC |                     |
	| node    | ha-212075 node delete m03 -v=7                                                   | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 13:01 UTC | 29 Apr 24 13:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-212075 stop -v=7                                                              | ha-212075 | jenkins | v1.33.0 | 29 Apr 24 13:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:57:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:57:07.760223  876497 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:57:07.760496  876497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:57:07.760505  876497 out.go:304] Setting ErrFile to fd 2...
	I0429 12:57:07.760509  876497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:57:07.760696  876497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:57:07.761408  876497 out.go:298] Setting JSON to false
	I0429 12:57:07.762431  876497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77973,"bootTime":1714317455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:57:07.762504  876497 start.go:139] virtualization: kvm guest
	I0429 12:57:07.765968  876497 out.go:177] * [ha-212075] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:57:07.767581  876497 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:57:07.767593  876497 notify.go:220] Checking for updates...
	I0429 12:57:07.770657  876497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:57:07.772323  876497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:57:07.773571  876497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:57:07.774881  876497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:57:07.776286  876497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:57:07.778067  876497 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:57:07.778212  876497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:57:07.778669  876497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:57:07.778725  876497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:57:07.795136  876497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0429 12:57:07.795626  876497 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:57:07.796273  876497 main.go:141] libmachine: Using API Version  1
	I0429 12:57:07.796304  876497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:57:07.796683  876497 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:57:07.796933  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.837512  876497 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 12:57:07.838733  876497 start.go:297] selected driver: kvm2
	I0429 12:57:07.838753  876497 start.go:901] validating driver "kvm2" against &{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:57:07.838931  876497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:57:07.839397  876497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:57:07.839509  876497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:57:07.856158  876497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:57:07.856991  876497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:57:07.857077  876497 cni.go:84] Creating CNI manager for ""
	I0429 12:57:07.857094  876497 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 12:57:07.857187  876497 start.go:340] cluster config:
	{Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:57:07.857339  876497 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:57:07.860264  876497 out.go:177] * Starting "ha-212075" primary control-plane node in "ha-212075" cluster
	I0429 12:57:07.861641  876497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:57:07.861703  876497 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 12:57:07.861720  876497 cache.go:56] Caching tarball of preloaded images
	I0429 12:57:07.861834  876497 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 12:57:07.861849  876497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:57:07.862023  876497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/config.json ...
	I0429 12:57:07.862286  876497 start.go:360] acquireMachinesLock for ha-212075: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:57:07.862362  876497 start.go:364] duration metric: took 48.339µs to acquireMachinesLock for "ha-212075"
	I0429 12:57:07.862383  876497 start.go:96] Skipping create...Using existing machine configuration
	I0429 12:57:07.862393  876497 fix.go:54] fixHost starting: 
	I0429 12:57:07.862725  876497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:57:07.862777  876497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:57:07.878948  876497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0429 12:57:07.879474  876497 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:57:07.880187  876497 main.go:141] libmachine: Using API Version  1
	I0429 12:57:07.880216  876497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:57:07.880562  876497 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:57:07.880783  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.881015  876497 main.go:141] libmachine: (ha-212075) Calling .GetState
	I0429 12:57:07.882804  876497 fix.go:112] recreateIfNeeded on ha-212075: state=Running err=<nil>
	W0429 12:57:07.882827  876497 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 12:57:07.885265  876497 out.go:177] * Updating the running kvm2 "ha-212075" VM ...
	I0429 12:57:07.887103  876497 machine.go:94] provisionDockerMachine start ...
	I0429 12:57:07.887132  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:57:07.887479  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:07.890580  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:07.891104  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:07.891144  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:07.891318  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:07.891570  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:07.891755  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:07.891925  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:07.892090  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:07.892311  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:07.892323  876497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:57:08.013219  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:57:08.013257  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.013555  876497 buildroot.go:166] provisioning hostname "ha-212075"
	I0429 12:57:08.013586  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.013815  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.017015  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.017475  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.017527  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.017685  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.017923  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.018104  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.018293  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.018532  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.018721  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.018733  876497 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-212075 && echo "ha-212075" | sudo tee /etc/hostname
	I0429 12:57:08.160624  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-212075
	
	I0429 12:57:08.160661  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.164072  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.164572  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.164600  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.164930  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.165141  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.165367  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.165523  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.165697  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.165944  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.165962  876497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-212075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-212075/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-212075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:57:08.284806  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:57:08.284856  876497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 12:57:08.284922  876497 buildroot.go:174] setting up certificates
	I0429 12:57:08.284936  876497 provision.go:84] configureAuth start
	I0429 12:57:08.284950  876497 main.go:141] libmachine: (ha-212075) Calling .GetMachineName
	I0429 12:57:08.285301  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:57:08.288155  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.288573  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.288603  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.288864  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.291442  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.291890  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.291933  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.292115  876497 provision.go:143] copyHostCerts
	I0429 12:57:08.292152  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:57:08.292192  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 12:57:08.292202  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 12:57:08.292280  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 12:57:08.292365  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:57:08.292383  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 12:57:08.292390  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 12:57:08.292421  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 12:57:08.292461  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:57:08.292477  876497 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 12:57:08.292483  876497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 12:57:08.292503  876497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 12:57:08.292548  876497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.ha-212075 san=[127.0.0.1 192.168.39.97 ha-212075 localhost minikube]
	I0429 12:57:08.521682  876497 provision.go:177] copyRemoteCerts
	I0429 12:57:08.521785  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:57:08.521818  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.524700  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.525082  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.525111  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.525305  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.525563  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.525751  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.525885  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:57:08.614730  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:57:08.614834  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:57:08.644874  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:57:08.644966  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 12:57:08.673023  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:57:08.673123  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:57:08.703550  876497 provision.go:87] duration metric: took 418.591409ms to configureAuth
	I0429 12:57:08.703592  876497 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:57:08.703885  876497 config.go:182] Loaded profile config "ha-212075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:57:08.703995  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:57:08.707033  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.707427  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:57:08.707453  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:57:08.707597  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:57:08.707853  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.708049  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:57:08.708191  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:57:08.708350  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:57:08.708529  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:57:08.708545  876497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:58:39.641978  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:58:39.642026  876497 machine.go:97] duration metric: took 1m31.754906281s to provisionDockerMachine
	I0429 12:58:39.642048  876497 start.go:293] postStartSetup for "ha-212075" (driver="kvm2")
	I0429 12:58:39.642066  876497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:58:39.642088  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:39.642488  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:58:39.642522  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.646396  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.646995  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.647029  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.647207  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.647471  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.647665  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.647845  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:39.741143  876497 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:58:39.746127  876497 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:58:39.746180  876497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 12:58:39.746258  876497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 12:58:39.746360  876497 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 12:58:39.746376  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 12:58:39.746469  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:58:39.757364  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:58:39.785516  876497 start.go:296] duration metric: took 143.445627ms for postStartSetup
	I0429 12:58:39.785583  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:39.785962  876497 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 12:58:39.785996  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.789244  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.789721  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.789751  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.789957  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.790227  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.790424  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.790606  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	W0429 12:58:39.892000  876497 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 12:58:39.892033  876497 fix.go:56] duration metric: took 1m32.029642416s for fixHost
	I0429 12:58:39.892059  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:39.895314  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.895756  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:39.895785  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:39.896022  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:39.896287  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.896496  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:39.896646  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:39.896896  876497 main.go:141] libmachine: Using SSH client type: native
	I0429 12:58:39.897093  876497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0429 12:58:39.897104  876497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:58:40.016888  876497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714395519.975956801
	
	I0429 12:58:40.016920  876497 fix.go:216] guest clock: 1714395519.975956801
	I0429 12:58:40.016929  876497 fix.go:229] Guest: 2024-04-29 12:58:39.975956801 +0000 UTC Remote: 2024-04-29 12:58:39.892041949 +0000 UTC m=+92.188304742 (delta=83.914852ms)
	I0429 12:58:40.016953  876497 fix.go:200] guest clock delta is within tolerance: 83.914852ms
	I0429 12:58:40.016958  876497 start.go:83] releasing machines lock for "ha-212075", held for 1m32.154586068s
	I0429 12:58:40.016979  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.017307  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:58:40.020445  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.020875  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.020898  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.021096  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.021714  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.021956  876497 main.go:141] libmachine: (ha-212075) Calling .DriverName
	I0429 12:58:40.022074  876497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:58:40.022137  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:40.022188  876497 ssh_runner.go:195] Run: cat /version.json
	I0429 12:58:40.022217  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHHostname
	I0429 12:58:40.024900  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025264  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025521  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.025555  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025725  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:40.025834  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:40.025868  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:40.025914  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:40.026058  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHPort
	I0429 12:58:40.026145  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:40.026253  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHKeyPath
	I0429 12:58:40.026352  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:40.026372  876497 main.go:141] libmachine: (ha-212075) Calling .GetSSHUsername
	I0429 12:58:40.026530  876497 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/ha-212075/id_rsa Username:docker}
	I0429 12:58:40.109004  876497 ssh_runner.go:195] Run: systemctl --version
	I0429 12:58:40.142683  876497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:58:40.319745  876497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:58:40.326473  876497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:58:40.326595  876497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:58:40.337050  876497 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 12:58:40.337094  876497 start.go:494] detecting cgroup driver to use...
	I0429 12:58:40.337209  876497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:58:40.355818  876497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:58:40.371448  876497 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:58:40.371520  876497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:58:40.387986  876497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:58:40.404031  876497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:58:40.575327  876497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:58:40.733824  876497 docker.go:233] disabling docker service ...
	I0429 12:58:40.733921  876497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:58:40.752190  876497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:58:40.767626  876497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:58:40.922535  876497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:58:41.075177  876497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:58:41.090078  876497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:58:41.113300  876497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 12:58:41.113379  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.125378  876497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:58:41.125452  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.137565  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.149947  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.162213  876497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:58:41.175054  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.187255  876497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.200488  876497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:58:41.213075  876497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:58:41.224002  876497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:58:41.234785  876497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:58:41.382183  876497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:58:44.366488  876497 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.984255379s)
	I0429 12:58:44.366528  876497 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:58:44.366594  876497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:58:44.372536  876497 start.go:562] Will wait 60s for crictl version
	I0429 12:58:44.372606  876497 ssh_runner.go:195] Run: which crictl
	I0429 12:58:44.376918  876497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:58:44.417731  876497 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
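The "Will wait 60s for socket path" and "Will wait 60s for crictl version" steps above are simple poll-until-ready loops followed by a version query. A rough, hypothetical equivalent of that waiting logic (not minikube's start.go code) looks like this:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

// waitForFile polls until path exists or the timeout elapses.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	// Once the socket is up, ask the runtime for its version, as the log does.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}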
	I0429 12:58:44.417813  876497 ssh_runner.go:195] Run: crio --version
	I0429 12:58:44.452142  876497 ssh_runner.go:195] Run: crio --version
	I0429 12:58:44.488643  876497 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 12:58:44.490072  876497 main.go:141] libmachine: (ha-212075) Calling .GetIP
	I0429 12:58:44.493014  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:44.493427  876497 main.go:141] libmachine: (ha-212075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:df", ip: ""} in network mk-ha-212075: {Iface:virbr1 ExpiryTime:2024-04-29 13:47:25 +0000 UTC Type:0 Mac:52:54:00:c0:56:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-212075 Clientid:01:52:54:00:c0:56:df}
	I0429 12:58:44.493455  876497 main.go:141] libmachine: (ha-212075) DBG | domain ha-212075 has defined IP address 192.168.39.97 and MAC address 52:54:00:c0:56:df in network mk-ha-212075
	I0429 12:58:44.493704  876497 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:58:44.499327  876497 kubeadm.go:877] updating cluster {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:58:44.499532  876497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:58:44.499597  876497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:58:44.556469  876497 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:58:44.556505  876497 crio.go:433] Images already preloaded, skipping extraction
	I0429 12:58:44.556586  876497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:58:44.596784  876497 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:58:44.596814  876497 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:58:44.596825  876497 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.30.0 crio true true} ...
	I0429 12:58:44.596945  876497 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-212075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:58:44.597018  876497 ssh_runner.go:195] Run: crio config
	I0429 12:58:44.646276  876497 cni.go:84] Creating CNI manager for ""
	I0429 12:58:44.646302  876497 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 12:58:44.646314  876497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:58:44.646348  876497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-212075 NodeName:ha-212075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:58:44.646515  876497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-212075"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
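The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick, hypothetical way to sanity-check such a stream before it is written to /var/tmp/minikube/kubeadm.yaml.new is to decode each document in turn, e.g. with gopkg.in/yaml.v3 (file name here is illustrative):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("invalid document: %v", err)
		}
		// Print the apiVersion/kind pair of each document in the stream.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}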
	
	I0429 12:58:44.646541  876497 kube-vip.go:111] generating kube-vip config ...
	I0429 12:58:44.646600  876497 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:58:44.659263  876497 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:58:44.659421  876497 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
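kube-vip.go:133 above prints the static Pod manifest after filling in the VIP address, port, and interface for this cluster. The fragment below is only a hypothetical illustration of that kind of templating with text/template; the type, field names, and template text are invented for the example and are not minikube's actual kube-vip template:

package main

import (
	"log"
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest; names are illustrative.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
}

const envTmpl = `    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	p := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}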
	I0429 12:58:44.659499  876497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:58:44.670248  876497 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:58:44.670328  876497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:58:44.681450  876497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:58:44.700953  876497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:58:44.720535  876497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 12:58:44.740624  876497 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 12:58:44.761676  876497 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:58:44.766352  876497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:58:44.914881  876497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:58:44.932810  876497 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075 for IP: 192.168.39.97
	I0429 12:58:44.932837  876497 certs.go:194] generating shared ca certs ...
	I0429 12:58:44.932865  876497 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:44.933021  876497 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 12:58:44.933063  876497 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 12:58:44.933072  876497 certs.go:256] generating profile certs ...
	I0429 12:58:44.933174  876497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/client.key
	I0429 12:58:44.933204  876497 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add
	I0429 12:58:44.933221  876497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.36 192.168.39.109 192.168.39.254]
	I0429 12:58:45.021686  876497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add ...
	I0429 12:58:45.021731  876497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add: {Name:mkda5cf7a551c10d59d01499fd8843801e13ca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:45.021929  876497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add ...
	I0429 12:58:45.021942  876497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add: {Name:mk9cfa1f4f18d14688e73084dd18bea565efeb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:58:45.022012  876497 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt.fcab5add -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt
	I0429 12:58:45.022167  876497 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key.fcab5add -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key
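crypto.go:68 above issues an API-server certificate whose SANs include the service IP, localhost, every control-plane node IP, and the HA VIP 192.168.39.254. A condensed, hypothetical sketch of issuing such a certificate with the standard library (minikube's crypto.go differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (stands in for minikubeCA).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Leaf certificate for the API server, with the IP SANs listed in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.36"),
			net.ParseIP("192.168.39.109"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}); err != nil {
		log.Fatal(err)
	}
}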
	I0429 12:58:45.022296  876497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key
	I0429 12:58:45.022313  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:58:45.022326  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:58:45.022339  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:58:45.022350  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:58:45.022362  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:58:45.022372  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:58:45.022381  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:58:45.022391  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:58:45.022442  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 12:58:45.022479  876497 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 12:58:45.022488  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:58:45.022508  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 12:58:45.022530  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:58:45.022551  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 12:58:45.022586  876497 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 12:58:45.022611  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.022625  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.022639  876497 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.023431  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:58:45.053176  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:58:45.113644  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:58:45.213487  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:58:45.268507  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 12:58:45.300357  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:58:45.341025  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:58:45.389050  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/ha-212075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:58:45.439204  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:58:45.473879  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 12:58:45.534205  876497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 12:58:45.593487  876497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:58:45.617841  876497 ssh_runner.go:195] Run: openssl version
	I0429 12:58:45.625208  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:58:45.639145  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.645289  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.645362  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:58:45.652141  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:58:45.662642  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 12:58:45.675557  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.681110  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.681198  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 12:58:45.687881  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 12:58:45.698363  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 12:58:45.710392  876497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.715399  876497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.715470  876497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 12:58:45.722054  876497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
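The three `openssl x509 -hash` / `ln -fs` pairs above install each CA certificate under /etc/ssl/certs with its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL locates trust anchors. A small, hypothetical helper doing the same on a local machine (the real steps run remotely over SSH with sudo):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert symlinks certPath into certDir under its OpenSSL subject hash,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
func installCert(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}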
	I0429 12:58:45.736227  876497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:58:45.743635  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 12:58:45.755675  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 12:58:45.762158  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 12:58:45.770383  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 12:58:45.777331  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 12:58:45.784427  876497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
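Each `openssl x509 ... -checkend 86400` call above exits non-zero when the certificate expires within the next 24 hours, which is what would trigger certificate regeneration before the cluster is restarted. A minimal equivalent check in Go (a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}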
	I0429 12:58:45.791401  876497 kubeadm.go:391] StartCluster: {Name:ha-212075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-212075 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.109 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.139 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:58:45.791564  876497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 12:58:45.791632  876497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:58:45.842472  876497 cri.go:89] found id: "d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf"
	I0429 12:58:45.842496  876497 cri.go:89] found id: "aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3"
	I0429 12:58:45.842499  876497 cri.go:89] found id: "af3e12529064e8cb13b80b365a008f82407a7550d1a33ab25a6ced09b9cdaebd"
	I0429 12:58:45.842502  876497 cri.go:89] found id: "5a8ef3d0d8f3019d5301958b70e597d258e43311b86fd5735f6f519d7eda183e"
	I0429 12:58:45.842505  876497 cri.go:89] found id: "127376dce0d17a01837d92104efb9f706143a8043ae9f7dd72e0f9e8471f1992"
	I0429 12:58:45.842508  876497 cri.go:89] found id: "161197324cadc877ec57a18139e26d918b0f6b141d1995f3917c73b97604b834"
	I0429 12:58:45.842511  876497 cri.go:89] found id: "8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad"
	I0429 12:58:45.842513  876497 cri.go:89] found id: "a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6"
	I0429 12:58:45.842515  876497 cri.go:89] found id: "7101dd3def458bbf8264638d74ec7010e02872566430ade0c9a8f549d0f5f99f"
	I0429 12:58:45.842522  876497 cri.go:89] found id: "ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d"
	I0429 12:58:45.842524  876497 cri.go:89] found id: "0c3dc33eb6d5db4717d250127a1abbf0202bb3ce7056499e1673e69d9884a523"
	I0429 12:58:45.842527  876497 cri.go:89] found id: "220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf"
	I0429 12:58:45.842529  876497 cri.go:89] found id: "382081d5ba19b58fe32cebbee291e6aa1da39c9e68841bb337d713605174e64d"
	I0429 12:58:45.842532  876497 cri.go:89] found id: "e9f8269450f858894c1b94127abd7a27936ce2cc4abbb18d34b631473dd5df16"
	I0429 12:58:45.842537  876497 cri.go:89] found id: "6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab"
	I0429 12:58:45.842542  876497 cri.go:89] found id: ""
	I0429 12:58:45.842601  876497 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.420188375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395875420152236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c646970-a1a2-4164-a7c3-8821d9c213f0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.421231759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6663da2-d464-4057-816f-bff27053abb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.421313764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6663da2-d464-4057-816f-bff27053abb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.421903449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6663da2-d464-4057-816f-bff27053abb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.476054926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cec52fc-3438-46a4-a1a6-1c53cac87795 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.476184810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cec52fc-3438-46a4-a1a6-1c53cac87795 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.478419959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a3572dc-d112-4083-a847-d31ef4d1ec4d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.479031977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395875478999763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a3572dc-d112-4083-a847-d31ef4d1ec4d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.480166084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f10a75a-aa76-427c-ad27-4e1de06f2459 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.480277936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f10a75a-aa76-427c-ad27-4e1de06f2459 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.481021087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f10a75a-aa76-427c-ad27-4e1de06f2459 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.531101352Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88273e91-edc6-42b7-96ce-56a0d8a93534 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.531198973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88273e91-edc6-42b7-96ce-56a0d8a93534 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.532509123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15ea077e-9d4c-4432-a9eb-be8ea967ca31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.533716547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395875533622776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15ea077e-9d4c-4432-a9eb-be8ea967ca31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.534627120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4e18e09-6d84-4a7f-bb4a-72a1ae9f2f63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.534747621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4e18e09-6d84-4a7f-bb4a-72a1ae9f2f63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.535203957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4e18e09-6d84-4a7f-bb4a-72a1ae9f2f63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.586905486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa2061f9-a6d3-4fda-abd7-ac0e5912af72 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.586984425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa2061f9-a6d3-4fda-abd7-ac0e5912af72 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.591207273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d275bf90-acc9-4318-ad10-4ffb2c0d8ef1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.591790489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714395875591760041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d275bf90-acc9-4318-ad10-4ffb2c0d8ef1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.592549314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bc42846-cdc1-4961-9da6-e17bbeaaf072 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.592641751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bc42846-cdc1-4961-9da6-e17bbeaaf072 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:04:35 ha-212075 crio[3787]: time="2024-04-29 13:04:35.593703542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38b45c6d5e0562d59b11b2c470a91a95857f930b8cdfe320f2cf1488c3eafc0d,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714395625154612291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714395610157517364,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714395576166564614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b,PodSandboxId:10bfd9ce78ec84109f280713540fa6557ac224f7e1cafc99e800f31518811f45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714395573169264255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2d2b6-bf65-4b8a-ba39-9c99a83f633e,},Annotations:map[string]string{io.kubernetes.container.hash: 493ddf20,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714395573174037525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83af75676bbe6afd3c073604c7cc641a360d98219bc8e4b654fa11a96290bb9,PodSandboxId:a6ffea1acb446813ac5936d914f9cc66cd4f5414b37a834ccdfc20e8daa6b4a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714395564459480491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kubernetes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0861c5f41887f49087d963af1842bb48e3e943167d1066721056cb0dcbc83314,PodSandboxId:423790ec30aaaa83a925e7874bae2890cafb4f8a2747475bc4ca473225fedf87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714395545667607591,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df78e2aba17ba8d18dc89fa959ae7081,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a,PodSandboxId:3948dd1b6cd6561d61b4b802460144a7650eec0d3a0a91902129640ae1e04065,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714395531096935729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e873d6
644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc,PodSandboxId:c36be2c0ab43c61ed5d800855f6d6452b2a3fcf4c136b2e64d40b334af7e2c57,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395531289731836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb,PodSandboxId:37bb5572bfa3b98ed47dd2f53760be97fda83ea86250c49beaa0143f08b5db51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714395531124941419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7,PodSandboxId:24576b10ad16b5f82ee95e08befa86e85d912995eb7945a0838531667a558b5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714395530981006633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99ab7da4da37d23a0aa069a82f24c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 58a58029,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361,PodSandboxId:4c603040e7937cd3dee21e8220bf3b50af0613d117e08add7215cb4fe242fbe4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714395530893931337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e05ac5498423338e4375f7ce45dcdf,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe,PodSandboxId:f4c3605d2255dabe4e5cafcfa8bccdf4aa7748035936d71c4fc2760b406e6bf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714395530840369617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf,PodSandboxId:cb8f4bd32e3dfb8fdb3a5ab1770fb13c8a70a902a73af6bfecef2f925a521687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714395525528872230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b71f12-5d80-4c41-ae97-a4d7e023ec98,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad8b079,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3,PodSandboxId:69284408d503f26279ac699859c2ca9ffd6138494aaf285d0f2dde0c68f34e22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714395525351793040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6079fd69c4d0760ef90f4fbb02bb18dbdcde3b26b5bfdb046497ef9e4bd5d23b,PodSandboxId:377fa41dd93a551baa7d287a8b847531a704550acfb115938d6e18094855ac00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714395035320635248,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rcq9m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de803f70-5f57-4282-af1e-47845231d712,},Annotations:map[string]string{io.kuberne
tes.container.hash: a1064d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad,PodSandboxId:e0bec542cd6898b5d41c6ac7b1cad838458cccef07b8d2122fb3f58fe1c6c984,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890366495942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2t8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 343d2b3e-1dde-4bf1-b27a-d720d1b21ef4,},Annotations:map[string]string{io.kubernetes.container.hash: d270191f,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6,PodSandboxId:cb2c23b3b3b1c4bbb2519fa0078398bfcd6b18e73f88235242c9233a0f905bac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714394890318755136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-x299s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b065a-2b42-4ac5-889e-c18200f43691,},Annotations:map[string]string{io.kubernetes.container.hash: 53e9084a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d,PodSandboxId:84bca27dac841d90c559f5b2bf511a7a2b8d1f6611b69b2befe3fa6b8841db52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714394887675617561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ncdsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632757a3-fa64-4483-af75-828e292ce184,},Annotations:map[string]string{io.kubernetes.container.hash: f6ee901a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf,PodSandboxId:258b9f1c2d733c4da858a872c6e9abc6c07bd92752810e1f632d1cf51d82f7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714394866813961140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef10b09e6abe0d5e22898bbab1b91b6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab,PodSandboxId:814df27c007a6d8d0f71243b0c74e23b9ff81a600847b10a9450247db7678439,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714394866667344702,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-212075,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f41646a8b279b2bde6d2412bfb785c,},Annotations:map[string]string{io.kubernetes.container.hash: 6647d5b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bc42846-cdc1-4961-9da6-e17bbeaaf072 name=/runtime.v1.RuntimeService/ListContainers
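
The block above is the CRI ListContainers response as logged through the otel-collector interceptor (otel-collector/interceptors.go) on the node. For triage, an equivalent snapshot can usually be pulled straight from the machine; the commands below are a suggested sketch using this run's profile name, not part of the captured output:

  # list all containers (running and exited) known to CRI-O on the control-plane node
  minikube ssh -p ha-212075 -- sudo crictl ps -a
  # same data as JSON, closest to the RuntimeService/ListContainers payload above
  minikube ssh -p ha-212075 -- sudo crictl ps -a -o json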
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	38b45c6d5e056       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   10bfd9ce78ec8       storage-provisioner
	de07ba1aa2df4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   cb8f4bd32e3df       kindnet-vnw75
	b6086e564f79a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   4c603040e7937       kube-controller-manager-ha-212075
	745e6582bceda       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Running             kube-apiserver            3                   24576b10ad16b       kube-apiserver-ha-212075
	d42656388820e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   10bfd9ce78ec8       storage-provisioner
	d83af75676bbe       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   a6ffea1acb446       busybox-fc5497c4f-rcq9m
	0861c5f41887f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   423790ec30aaa       kube-vip-ha-212075
	12e873d6644ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c36be2c0ab43c       coredns-7db6d8ff4d-x299s
	38b6240c50d49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   37bb5572bfa3b       etcd-ha-212075
	68930ae1a81a7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   3948dd1b6cd65       kube-proxy-ncdsk
	f5a8e5fbbfe64       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   24576b10ad16b       kube-apiserver-ha-212075
	41876051108d0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   4c603040e7937       kube-controller-manager-ha-212075
	47ff59770e077       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   f4c3605d2255d       kube-scheduler-ha-212075
	d94e4c884e2e5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   cb8f4bd32e3df       kindnet-vnw75
	aa2b53bbde63e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   69284408d503f       coredns-7db6d8ff4d-c2t8g
	6079fd69c4d07       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   377fa41dd93a5       busybox-fc5497c4f-rcq9m
	8923eb9969f74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   e0bec542cd689       coredns-7db6d8ff4d-c2t8g
	a7bedc2be5698       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   cb2c23b3b3b1c       coredns-7db6d8ff4d-x299s
	ae027e60b2a1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago      Exited              kube-proxy                0                   84bca27dac841       kube-proxy-ncdsk
	220538e592762       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   258b9f1c2d733       kube-scheduler-ha-212075
	6ba91c742f08c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   814df27c007a6       etcd-ha-212075
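
The table above resembles a crictl listing collected by minikube logs: several control-plane containers (kube-apiserver, kube-controller-manager, kindnet-cni, storage-provisioner) show Exited attempts followed by newer Running ones. A hedged cross-check against the API server's view of the same pods (the kubeconfig context name is assumed to match the profile) would be:

  kubectl --context ha-212075 get pods -A -o wide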
	
	
	==> coredns [12e873d6644ac7edf5acd530b62af9a507f5267fff159c4308367205aee43acc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39830->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39830->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35226->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1891097803]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 12:59:05.609) (total time: 10680ms):
	Trace[1891097803]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer 10680ms (12:59:16.289)
	Trace[1891097803]: [10.680502529s] [10.680502529s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39814->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
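
The repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "connection refused" errors above mean this CoreDNS replica could not reach the in-cluster kubernetes Service VIP while the apiserver was being restarted. A hedged way to reproduce that reachability check from inside the cluster (the pod name and image are illustrative, not from this run):

  # probe the kubernetes Service VIP from a throwaway pod; /healthz is readable without credentials
  kubectl --context ha-212075 run curl-check --rm -it --restart=Never \
    --image=curlimages/curl -- curl -sk https://10.96.0.1:443/healthz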
	
	
	==> coredns [8923eb9969f74944eebcb146bad6f62077fbfc7fb2419f92960ba2f5fd5974ad] <==
	[INFO] 10.244.0.4:39294 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002366491s
	[INFO] 10.244.0.4:47691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095562s
	[INFO] 10.244.0.4:49991 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146136s
	[INFO] 10.244.0.4:45880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133788s
	[INFO] 10.244.2.2:40297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017628s
	[INFO] 10.244.2.2:44282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974026s
	[INFO] 10.244.2.2:48058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199321s
	[INFO] 10.244.2.2:50097 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220995s
	[INFO] 10.244.2.2:60877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132114s
	[INFO] 10.244.2.2:38824 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114121s
	[INFO] 10.244.1.2:60691 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192262s
	[INFO] 10.244.1.2:51664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123427s
	[INFO] 10.244.1.2:57326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156295s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105493s
	[INFO] 10.244.0.4:39454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248959s
	[INFO] 10.244.2.2:56559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010789s
	[INFO] 10.244.1.2:57860 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144445s
	[INFO] 10.244.1.2:40470 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145332s
	[INFO] 10.244.0.4:35067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124783s
	[INFO] 10.244.2.2:47889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150138s
	[INFO] 10.244.2.2:60310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091159s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1858&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
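
Before receiving SIGTERM this replica was answering ordinary A/AAAA/PTR queries for names such as kubernetes.default and host.minikube.internal. A hedged spot-check of in-cluster DNS along the same lines (the pod name and image are illustrative, not from this run):

  # resolve the kubernetes Service through cluster DNS from a throwaway pod
  kubectl --context ha-212075 run dns-check --rm -it --restart=Never \
    --image=busybox:1.28 -- nslookup kubernetes.default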
	
	
	==> coredns [a7bedc2be56987be4610d367e93c3951fbe9f41a3abb1cb27668d1b7b2488cf6] <==
	[INFO] 10.244.1.2:43604 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158954s
	[INFO] 10.244.0.4:58453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127934s
	[INFO] 10.244.0.4:52484 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001277738s
	[INFO] 10.244.0.4:47770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128102s
	[INFO] 10.244.0.4:53060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103039s
	[INFO] 10.244.2.2:55991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854135s
	[INFO] 10.244.2.2:33533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090157s
	[INFO] 10.244.1.2:52893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090867s
	[INFO] 10.244.0.4:54479 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102901s
	[INFO] 10.244.0.4:53525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000359828s
	[INFO] 10.244.2.2:57755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000264423s
	[INFO] 10.244.2.2:47852 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118616s
	[INFO] 10.244.2.2:38289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112347s
	[INFO] 10.244.1.2:55092 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184788s
	[INFO] 10.244.1.2:52235 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146353s
	[INFO] 10.244.0.4:55598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137209s
	[INFO] 10.244.0.4:54649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121493s
	[INFO] 10.244.0.4:50694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136791s
	[INFO] 10.244.2.2:49177 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104896s
	[INFO] 10.244.2.2:41037 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088839s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [aa2b53bbde63e4229a74f414642026c87138383321370abca2722060911728d3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[5625112]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 12:58:57.212) (total time: 10001ms):
	Trace[5625112]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:59:07.214)
	Trace[5625112]: [10.001578514s] [10.001578514s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35316->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35316->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-212075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:04:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:59:32 +0000   Mon, 29 Apr 2024 12:48:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-212075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eefe9cc034f74464a919edd5f6b61c2b
	  System UUID:                eefe9cc0-34f7-4464-a919-edd5f6b61c2b
	  Boot ID:                    20b6e47d-4696-4b2a-ba7c-62e73184f5c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rcq9m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-c2t8g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-x299s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-212075                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-vnw75                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-212075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-212075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-ncdsk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-212075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-212075                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m2s               kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node ha-212075 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node ha-212075 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node ha-212075 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-212075 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Warning  ContainerGCFailed        6m43s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m56s              node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           4m48s              node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
	  Normal   RegisteredNode           3m7s               node-controller  Node ha-212075 event: Registered Node ha-212075 in Controller
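
This section is kubectl describe output for the primary control-plane node; the ContainerGCFailed warning marks the window when /var/run/crio/crio.sock was unavailable, and the later RegisteredNode events show the node rejoining after the restart. A hedged one-liner to re-query just the condition summary (context name assumed to match the profile):

  kubectl --context ha-212075 get nodes -o wide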
	
	
	Name:               ha-212075-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_49_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:48:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:04:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:03:20 +0000   Mon, 29 Apr 2024 13:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:03:20 +0000   Mon, 29 Apr 2024 13:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:03:20 +0000   Mon, 29 Apr 2024 13:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:03:20 +0000   Mon, 29 Apr 2024 13:03:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-212075-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c5f79339047d6aaf2c88397c97942
	  System UUID:                088c5f79-3390-47d6-aaf2-c88397c97942
	  Boot ID:                    22594bb6-fb8b-4284-9426-944febb4fe41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9q8rf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-212075-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-sx2zd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-212075-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-212075-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-sfmhh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-212075-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-212075-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m58s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-212075-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-212075-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-212075-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-212075-m02 event: Registered Node ha-212075-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-212075-m02 status is now: NodeNotReady
	
	
	Name:               ha-212075-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-212075-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=ha-212075
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_51_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:51:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-212075-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:02:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:02:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:02:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:02:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 13:01:46 +0000   Mon, 29 Apr 2024 13:02:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-212075-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee58aa83584b463285f294fa28d19e05
	  System UUID:                ee58aa83-584b-4632-85f2-94fa28d19e05
	  Boot ID:                    9a189d51-10ab-493c-a851-4fd7b23ecd1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nd8fq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kindnet-d6tbw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bnbr8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-212075-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m56s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   NodeNotReady             4m15s                  node-controller  Node ha-212075-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-212075-m04 event: Registered Node ha-212075-m04 in Controller
	  Normal   Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m50s (x2 over 2m50s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m50s (x2 over 2m50s)  kubelet          Node ha-212075-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m50s (x2 over 2m50s)  kubelet          Node ha-212075-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m50s                  kubelet          Node ha-212075-m04 has been rebooted, boot id: 9a189d51-10ab-493c-a851-4fd7b23ecd1e
	  Normal   NodeReady                2m50s                  kubelet          Node ha-212075-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-212075-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.473754] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.066120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063959] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171013] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136787] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.290881] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.567542] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.067175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.833264] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.214053] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.340857] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.082766] kauditd_printk_skb: 40 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.082460] kauditd_printk_skb: 72 callbacks suppressed
	[Apr29 12:58] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.174026] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.186334] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.148076] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.314140] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +3.531271] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.908485] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.004807] kauditd_printk_skb: 2 callbacks suppressed
	[Apr29 12:59] kauditd_printk_skb: 58 callbacks suppressed
	[  +9.062141] kauditd_printk_skb: 1 callbacks suppressed
	[ +28.166402] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [38b6240c50d49ae6ebb8790fae0e2808871cd68175792e988fda5df5773fd8cb] <==
	{"level":"info","ts":"2024-04-29T13:01:10.602242Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.636476Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"e28237f435b7165","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T13:01:10.636536Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:01:10.645078Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"e28237f435b7165","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T13:01:10.645371Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.196609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(4189311317648796294 17735085251460689206)"}
	{"level":"info","ts":"2024-04-29T13:02:01.199157Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","removed-remote-peer-id":"e28237f435b7165","removed-remote-peer-urls":["https://192.168.39.109:2380"]}
	{"level":"warn","ts":"2024-04-29T13:02:01.199437Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"f61fae125a956d36","removed-member-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.199498Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-04-29T13:02:01.199299Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.200093Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.200179Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.201123Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.201222Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.201474Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.201814Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165","error":"context canceled"}
	{"level":"warn","ts":"2024-04-29T13:02:01.2019Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e28237f435b7165","error":"failed to read e28237f435b7165 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-29T13:02:01.201965Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.202254Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165","error":"context canceled"}
	{"level":"info","ts":"2024-04-29T13:02:01.20232Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.202401Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.202471Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"f61fae125a956d36","removed-remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T13:02:01.202547Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"f61fae125a956d36","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"e28237f435b7165"}
	{"level":"warn","ts":"2024-04-29T13:02:01.220935Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.109:53582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-29T13:02:01.223863Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.109:53578","server-name":"","error":"read tcp 192.168.39.97:2380->192.168.39.109:53578: read: connection reset by peer"}
	
	
	==> etcd [6ba91c742f08ce5ca7f85885956dbab905a45671f8b29cd3697fce939dae35ab] <==
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.884393Z","caller":"traceutil/trace.go:171","msg":"trace[2076736015] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"8.455297133s","start":"2024-04-29T12:57:00.429093Z","end":"2024-04-29T12:57:08.88439Z","steps":["trace[2076736015] 'agreement among raft nodes before linearized reading'  (duration: 8.455290936s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.884425Z","caller":"traceutil/trace.go:171","msg":"trace[494929290] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; }","duration":"7.397452906s","start":"2024-04-29T12:57:01.486965Z","end":"2024-04-29T12:57:08.884418Z","steps":["trace[494929290] 'agreement among raft nodes before linearized reading'  (duration: 7.397441637s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.88321Z","caller":"traceutil/trace.go:171","msg":"trace[1933475723] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"7.146336445s","start":"2024-04-29T12:57:01.736871Z","end":"2024-04-29T12:57:08.883208Z","steps":["trace[1933475723] 'agreement among raft nodes before linearized reading'  (duration: 7.146332305s)"],"step_count":1}
	2024/04/29 12:57:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T12:57:08.912543Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f61fae125a956d36","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T12:57:08.91285Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.912903Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.912964Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913027Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.9131Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913199Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913236Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a236c728dfaea86"}
	{"level":"info","ts":"2024-04-29T12:57:08.913263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.91331Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.91337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913445Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913513Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913621Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.913842Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e28237f435b7165"}
	{"level":"info","ts":"2024-04-29T12:57:08.917921Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-29T12:57:08.918145Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-29T12:57:08.918193Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-212075","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 13:04:36 up 17 min,  0 users,  load average: 0.07, 0.20, 0.18
	Linux ha-212075 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d94e4c884e2e52d53f50102dd82186f25bb4cc3a036a2d29fde71da9a5be8fbf] <==
	I0429 12:58:45.888374       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 12:58:45.888435       1 main.go:107] hostIP = 192.168.39.97
	podIP = 192.168.39.97
	I0429 12:58:45.889142       1 main.go:116] setting mtu 1500 for CNI 
	I0429 12:58:45.889170       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 12:58:45.889194       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 12:58:46.280219       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 12:58:48.641119       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 12:58:51.713633       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 12:59:03.722194       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 12:59:13.248979       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.239:58660->10.96.0.1:443: read: connection reset by peer
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.239:58660->10.96.0.1:443: read: connection reset by peer
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [de07ba1aa2df47d767de97c3009f61029b276ada51dfa9b7ebf954f4eb4ac21a] <==
	I0429 13:03:51.344588       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:04:01.352929       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:04:01.353022       1 main.go:227] handling current node
	I0429 13:04:01.353047       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:04:01.353065       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:04:01.353203       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:04:01.353224       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:04:11.366543       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:04:11.367040       1 main.go:227] handling current node
	I0429 13:04:11.367076       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:04:11.367098       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:04:11.367582       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:04:11.367620       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:04:21.382255       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:04:21.382298       1 main.go:227] handling current node
	I0429 13:04:21.382310       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:04:21.382315       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:04:21.382420       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:04:21.382475       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	I0429 13:04:31.394339       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0429 13:04:31.394497       1 main.go:227] handling current node
	I0429 13:04:31.394551       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0429 13:04:31.394573       1 main.go:250] Node ha-212075-m02 has CIDR [10.244.1.0/24] 
	I0429 13:04:31.394781       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0429 13:04:31.394813       1 main.go:250] Node ha-212075-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [745e6582bceda2300c900747e2f6c233aa65486eead9e301b039f48bc32fd8c7] <==
	I0429 12:59:36.085441       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 12:59:36.163871       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 12:59:36.168608       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 12:59:36.173233       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 12:59:36.173267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 12:59:36.175737       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 12:59:36.175821       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 12:59:36.175827       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 12:59:36.175925       1 aggregator.go:165] initial CRD sync complete...
	I0429 12:59:36.175945       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 12:59:36.175950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 12:59:36.175955       1 cache.go:39] Caches are synced for autoregister controller
	I0429 12:59:36.192539       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 12:59:36.198637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 12:59:36.202433       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 12:59:36.202502       1 policy_source.go:224] refreshing policies
	W0429 12:59:36.226973       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.36]
	I0429 12:59:36.228475       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 12:59:36.242387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 12:59:36.251731       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 12:59:36.292013       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 12:59:37.070264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 12:59:37.584379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.36 192.168.39.97]
	W0429 12:59:47.581966       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36 192.168.39.97]
	W0429 13:02:17.590144       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36 192.168.39.97]
	
	
	==> kube-apiserver [f5a8e5fbbfe64261bc7fff8e515fbe8cee0c9c4c523c272e67c109b5bfc766b7] <==
	I0429 12:58:51.593850       1 options.go:221] external host was not specified, using 192.168.39.97
	I0429 12:58:51.595825       1 server.go:148] Version: v1.30.0
	I0429 12:58:51.595921       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:58:52.219138       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 12:58:52.233868       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 12:58:52.239867       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 12:58:52.239934       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 12:58:52.240152       1 instance.go:299] Using reconciler: lease
	W0429 12:59:12.211981       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 12:59:12.212041       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0429 12:59:12.241559       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [41876051108d0c3cbeae94e738c5f2f6cfa1cfc761ad5f01a4a6aa70908d7361] <==
	I0429 12:58:51.950198       1 serving.go:380] Generated self-signed cert in-memory
	I0429 12:58:52.604116       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 12:58:52.604159       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:58:52.605975       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 12:58:52.606092       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 12:58:52.606188       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 12:58:52.606370       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0429 12:59:13.249485       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.97:8443/healthz\": dial tcp 192.168.39.97:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b6086e564f79ae4f0930ba7565fab3ae4f9e52ff3b48cdc30b00e97ccd2ef5be] <==
	I0429 13:01:58.147121       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.280553ms"
	I0429 13:01:58.147225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.812µs"
	I0429 13:01:59.280552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.95021ms"
	I0429 13:01:59.281166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="236.303µs"
	I0429 13:01:59.944951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.265µs"
	I0429 13:01:59.962791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.147µs"
	I0429 13:02:00.270235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.292µs"
	I0429 13:02:00.280235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.156µs"
	I0429 13:02:12.859120       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-212075-m04"
	E0429 13:02:28.731859       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:28.731997       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:28.732023       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:28.732048       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:28.732072       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:48.733364       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:48.733493       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:48.733531       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:48.733555       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	E0429 13:02:48.733600       1 gc_controller.go:153] "Failed to get node" err="node \"ha-212075-m03\" not found" logger="pod-garbage-collector-controller" node="ha-212075-m03"
	I0429 13:02:48.853792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.055208ms"
	I0429 13:02:48.855320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.744µs"
	I0429 13:02:51.471094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.178673ms"
	I0429 13:02:51.471188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.074µs"
	I0429 13:03:28.737346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.347135ms"
	I0429 13:03:28.737531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.209µs"
	
	
	==> kube-proxy [68930ae1a81a7657298688c75b30948aebf127287261a872e89f71ec65a9e65a] <==
	I0429 12:58:52.696573       1 server_linux.go:69] "Using iptables proxy"
	E0429 12:58:53.249319       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:58:56.322127       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:58:59.393763       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:59:05.537128       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 12:59:14.753595       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-212075\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 12:59:32.973597       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0429 12:59:33.057634       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:59:33.057741       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:59:33.057760       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:59:33.065789       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:59:33.066577       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:59:33.066612       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:59:33.069132       1 config.go:192] "Starting service config controller"
	I0429 12:59:33.069175       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:59:33.069417       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:59:33.069424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:59:33.070929       1 config.go:319] "Starting node config controller"
	I0429 12:59:33.070959       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:59:33.169434       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:59:33.169765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:59:33.171482       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ae027e60b2a1eaabf421377161cd1baef302a90ef2098603aef1534cd97af30d] <==
	E0429 12:55:58.148221       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:01.217346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:01.217549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:01.217735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:01.217795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:04.289341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:04.290001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:07.362339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:07.362643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:07.362557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:07.362737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:10.433935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:10.434131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:16.579827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:16.580061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:19.651953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:19.652269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:22.722770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:22.722999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:38.083091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:38.083202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1790": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:38.083436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:38.083560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 12:56:41.154641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 12:56:41.155293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-212075&resourceVersion=1859": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [220538e592762feea78576a891b3a24ffcc0f8de3708d743c54c2d703427e0cf] <==
	W0429 12:57:05.016269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.016415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:05.065153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.065253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:05.116701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:57:05.116749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:57:05.141571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:57:05.141717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:57:05.217620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:57:05.217746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:57:05.228465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:57:05.228537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:57:05.257487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:57:05.257636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:57:05.577452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:57:05.577557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 12:57:05.676718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:57:05.676811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:57:05.716424       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:57:05.716581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:57:06.031254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:57:06.031372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:57:08.388157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:57:08.388256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:57:08.829244       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [47ff59770e07774ef5a30c318f8486fa7674a7d8d17b21b25ec1fbd847f3b9fe] <==
	W0429 12:59:31.421593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:31.421644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.224096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.97:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.224163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.97:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.375354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.375473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.460486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.97:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.460626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.97:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:32.950832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:32.950967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:33.073910       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:33.073971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:33.198249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0429 12:59:33.198429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0429 12:59:36.169385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:59:36.169419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:59:36.169476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 12:59:36.172621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 12:59:36.177235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:59:36.177443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0429 12:59:50.456414       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 13:01:57.880236       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nd8fq\": pod busybox-fc5497c4f-nd8fq is already assigned to node \"ha-212075-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-nd8fq" node="ha-212075-m04"
	E0429 13:01:57.881964       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ab025066-7c3e-4b1f-83c2-0d692de5732b(default/busybox-fc5497c4f-nd8fq) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-nd8fq"
	E0429 13:01:57.882100       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nd8fq\": pod busybox-fc5497c4f-nd8fq is already assigned to node \"ha-212075-m04\"" pod="default/busybox-fc5497c4f-nd8fq"
	I0429 13:01:57.882155       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-nd8fq" node="ha-212075-m04"
	
	
	==> kubelet <==
	Apr 29 13:00:25 ha-212075 kubelet[1362]: I0429 13:00:25.134463    1362 scope.go:117] "RemoveContainer" containerID="d42656388820e9c297867f24da758c57066a51bbe02371f7769d281b72afc50b"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.134583    1362 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-212075" podUID="44e6d402-7c09-4c33-9905-15f9d4a29381"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.156963    1362 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-212075"
	Apr 29 13:00:36 ha-212075 kubelet[1362]: I0429 13:00:36.506593    1362 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-212075" podUID="44e6d402-7c09-4c33-9905-15f9d4a29381"
	Apr 29 13:00:43 ha-212075 kubelet[1362]: I0429 13:00:43.159099    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-212075" podStartSLOduration=7.159070499 podStartE2EDuration="7.159070499s" podCreationTimestamp="2024-04-29 13:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:00:43.158903899 +0000 UTC m=+770.146398177" watchObservedRunningTime="2024-04-29 13:00:43.159070499 +0000 UTC m=+770.146564776"
	Apr 29 13:00:53 ha-212075 kubelet[1362]: E0429 13:00:53.152946    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:00:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:00:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:01:53 ha-212075 kubelet[1362]: E0429 13:01:53.154265    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:01:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:01:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:02:53 ha-212075 kubelet[1362]: E0429 13:02:53.152356    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:02:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:02:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:02:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:02:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:03:53 ha-212075 kubelet[1362]: E0429 13:03:53.153922    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:03:53 ha-212075 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:03:53 ha-212075 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:03:53 ha-212075 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:03:53 ha-212075 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:04:35.107091  878997 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18773-847310/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-212075 -n ha-212075
helpers_test.go:261: (dbg) Run:  kubectl --context ha-212075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.19s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (305.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404116
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-404116
E0429 13:21:19.253260  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-404116: exit status 82 (2m2.781259056s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-404116-m03"  ...
	* Stopping node "multinode-404116-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-404116" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404116 --wait=true -v=8 --alsologtostderr
E0429 13:24:22.301063  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404116 --wait=true -v=8 --alsologtostderr: (3m0.062667909s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404116
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-404116 -n multinode-404116
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-404116 logs -n 25: (1.834429934s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116:/home/docker/cp-test_multinode-404116-m02_multinode-404116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116 sudo cat                                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m02_multinode-404116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03:/home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116-m03 sudo cat                                   | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp testdata/cp-test.txt                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116:/home/docker/cp-test_multinode-404116-m03_multinode-404116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116 sudo cat                                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m03_multinode-404116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02:/home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116-m02 sudo cat                                   | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-404116 node stop m03                                                          | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	| node    | multinode-404116 node start                                                             | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-404116                                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:20 UTC |                     |
	| stop    | -p multinode-404116                                                                     | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:20 UTC |                     |
	| start   | -p multinode-404116                                                                     | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:22 UTC | 29 Apr 24 13:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-404116                                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:22:10
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:22:10.380725  888828 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:22:10.381066  888828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:22:10.381079  888828 out.go:304] Setting ErrFile to fd 2...
	I0429 13:22:10.381084  888828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:22:10.381326  888828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:22:10.382004  888828 out.go:298] Setting JSON to false
	I0429 13:22:10.383190  888828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":79475,"bootTime":1714317455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:22:10.383278  888828 start.go:139] virtualization: kvm guest
	I0429 13:22:10.386372  888828 out.go:177] * [multinode-404116] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:22:10.388157  888828 notify.go:220] Checking for updates...
	I0429 13:22:10.388189  888828 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:22:10.389891  888828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:22:10.391812  888828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:22:10.393406  888828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:22:10.395092  888828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:22:10.396771  888828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:22:10.398911  888828 config.go:182] Loaded profile config "multinode-404116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:22:10.399065  888828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:22:10.399599  888828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:22:10.399695  888828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:22:10.416696  888828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0429 13:22:10.417273  888828 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:22:10.417860  888828 main.go:141] libmachine: Using API Version  1
	I0429 13:22:10.417884  888828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:22:10.418354  888828 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:22:10.418666  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.459291  888828 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 13:22:10.460654  888828 start.go:297] selected driver: kvm2
	I0429 13:22:10.460680  888828 start.go:901] validating driver "kvm2" against &{Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:22:10.460853  888828 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:22:10.461250  888828 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:22:10.461336  888828 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:22:10.478115  888828 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:22:10.478857  888828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:22:10.478922  888828 cni.go:84] Creating CNI manager for ""
	I0429 13:22:10.478935  888828 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:22:10.478999  888828 start.go:340] cluster config:
	{Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:22:10.479143  888828 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:22:10.481532  888828 out.go:177] * Starting "multinode-404116" primary control-plane node in "multinode-404116" cluster
	I0429 13:22:10.482872  888828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:22:10.482930  888828 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:22:10.482944  888828 cache.go:56] Caching tarball of preloaded images
	I0429 13:22:10.483046  888828 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:22:10.483062  888828 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 13:22:10.483201  888828 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/config.json ...
	I0429 13:22:10.483454  888828 start.go:360] acquireMachinesLock for multinode-404116: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:22:10.483509  888828 start.go:364] duration metric: took 30.373µs to acquireMachinesLock for "multinode-404116"
	I0429 13:22:10.483530  888828 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:22:10.483539  888828 fix.go:54] fixHost starting: 
	I0429 13:22:10.483860  888828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:22:10.483908  888828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:22:10.499777  888828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0429 13:22:10.500287  888828 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:22:10.500874  888828 main.go:141] libmachine: Using API Version  1
	I0429 13:22:10.500903  888828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:22:10.501266  888828 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:22:10.501459  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.501591  888828 main.go:141] libmachine: (multinode-404116) Calling .GetState
	I0429 13:22:10.503527  888828 fix.go:112] recreateIfNeeded on multinode-404116: state=Running err=<nil>
	W0429 13:22:10.503567  888828 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:22:10.506029  888828 out.go:177] * Updating the running kvm2 "multinode-404116" VM ...
	I0429 13:22:10.507484  888828 machine.go:94] provisionDockerMachine start ...
	I0429 13:22:10.507519  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.507855  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.511413  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.512028  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.512081  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.512229  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.512484  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.512659  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.512863  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.513050  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.513327  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.513349  888828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:22:10.633684  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-404116
	
	I0429 13:22:10.633737  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.634098  888828 buildroot.go:166] provisioning hostname "multinode-404116"
	I0429 13:22:10.634136  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.634330  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.637950  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.638523  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.638574  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.638858  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.639140  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.639410  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.639590  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.639810  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.640068  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.640089  888828 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-404116 && echo "multinode-404116" | sudo tee /etc/hostname
	I0429 13:22:10.781571  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-404116
	
	I0429 13:22:10.781607  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.784810  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.785338  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.785377  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.785645  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.785903  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.786091  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.786249  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.786414  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.786618  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.786642  888828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-404116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-404116/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-404116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:22:10.901158  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:22:10.901199  888828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:22:10.901272  888828 buildroot.go:174] setting up certificates
	I0429 13:22:10.901290  888828 provision.go:84] configureAuth start
	I0429 13:22:10.901308  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.901618  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:22:10.904659  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.905121  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.905149  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.905327  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.907898  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.908288  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.908318  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.908486  888828 provision.go:143] copyHostCerts
	I0429 13:22:10.908560  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:22:10.908607  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:22:10.908621  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:22:10.908721  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:22:10.908857  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:22:10.908900  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:22:10.908910  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:22:10.908956  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:22:10.909024  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:22:10.909050  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:22:10.909060  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:22:10.909155  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:22:10.909244  888828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.multinode-404116 san=[127.0.0.1 192.168.39.179 localhost minikube multinode-404116]
	I0429 13:22:11.047075  888828 provision.go:177] copyRemoteCerts
	I0429 13:22:11.047144  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:22:11.047172  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:11.050212  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.050581  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:11.050610  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.050798  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:11.051010  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.051284  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:11.051494  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:22:11.140649  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 13:22:11.140749  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:22:11.170840  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 13:22:11.170928  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 13:22:11.200391  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 13:22:11.200479  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:22:11.228660  888828 provision.go:87] duration metric: took 327.355422ms to configureAuth
	I0429 13:22:11.228693  888828 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:22:11.228932  888828 config.go:182] Loaded profile config "multinode-404116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:22:11.229014  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:11.231687  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.232124  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:11.232158  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.232364  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:11.232588  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.232771  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.232977  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:11.233194  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:11.233416  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:11.233433  888828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:23:42.022158  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:23:42.022205  888828 machine.go:97] duration metric: took 1m31.514702644s to provisionDockerMachine
	I0429 13:23:42.022221  888828 start.go:293] postStartSetup for "multinode-404116" (driver="kvm2")
	I0429 13:23:42.022241  888828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:23:42.022281  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.022640  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:23:42.022681  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.026455  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.026998  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.027036  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.027225  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.027517  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.027747  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.028029  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.121476  888828 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:23:42.126207  888828 command_runner.go:130] > NAME=Buildroot
	I0429 13:23:42.126238  888828 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 13:23:42.126245  888828 command_runner.go:130] > ID=buildroot
	I0429 13:23:42.126252  888828 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 13:23:42.126259  888828 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 13:23:42.126312  888828 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:23:42.126331  888828 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:23:42.126438  888828 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:23:42.126557  888828 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:23:42.126575  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 13:23:42.126719  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:23:42.137939  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:23:42.165261  888828 start.go:296] duration metric: took 143.022946ms for postStartSetup
	I0429 13:23:42.165344  888828 fix.go:56] duration metric: took 1m31.681804102s for fixHost
	I0429 13:23:42.165373  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.168466  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.168925  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.168969  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.169204  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.169462  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.169622  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.169792  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.170000  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:23:42.170185  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:23:42.170196  888828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:23:42.285106  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714397022.261077798
	
	I0429 13:23:42.285138  888828 fix.go:216] guest clock: 1714397022.261077798
	I0429 13:23:42.285146  888828 fix.go:229] Guest: 2024-04-29 13:23:42.261077798 +0000 UTC Remote: 2024-04-29 13:23:42.165351568 +0000 UTC m=+91.843247629 (delta=95.72623ms)
	I0429 13:23:42.285191  888828 fix.go:200] guest clock delta is within tolerance: 95.72623ms
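	(The delta above is simply guest clock minus host clock at the moment of the check: 1714397022.261077798 - 1714397022.165351568 ≈ 0.09572623 s, the 95.72623ms that fix.go then reports as within tolerance, so no clock adjustment is needed.)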
	I0429 13:23:42.285199  888828 start.go:83] releasing machines lock for "multinode-404116", held for 1m31.801677231s
	I0429 13:23:42.285228  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.285612  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:23:42.289313  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.289813  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.289845  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.290111  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.290920  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.291205  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.291333  888828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:23:42.291387  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.291541  888828 ssh_runner.go:195] Run: cat /version.json
	I0429 13:23:42.291575  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.295256  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295289  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295826  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.295873  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.295902  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295921  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.296090  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.296110  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.296357  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.296357  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.296545  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.296548  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.296832  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.296952  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.402890  888828 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 13:23:42.402953  888828 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 13:23:42.403144  888828 ssh_runner.go:195] Run: systemctl --version
	I0429 13:23:42.409843  888828 command_runner.go:130] > systemd 252 (252)
	I0429 13:23:42.409890  888828 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 13:23:42.409993  888828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:23:42.580166  888828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 13:23:42.587032  888828 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 13:23:42.587106  888828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:23:42.587173  888828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:23:42.598571  888828 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 13:23:42.598615  888828 start.go:494] detecting cgroup driver to use...
	I0429 13:23:42.598704  888828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:23:42.617547  888828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:23:42.633809  888828 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:23:42.633898  888828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:23:42.650294  888828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:23:42.666463  888828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:23:42.820781  888828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:23:42.970140  888828 docker.go:233] disabling docker service ...
	I0429 13:23:42.970228  888828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:23:42.989092  888828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:23:43.005789  888828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:23:43.159499  888828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:23:43.312155  888828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:23:43.328715  888828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:23:43.350301  888828 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 13:23:43.350354  888828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:23:43.350403  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.363505  888828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:23:43.363595  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.376955  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.389807  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.403459  888828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:23:43.416706  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.429463  888828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.441678  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.454378  888828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:23:43.465805  888828 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 13:23:43.465920  888828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:23:43.478300  888828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:23:43.629388  888828 ssh_runner.go:195] Run: sudo systemctl restart crio
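	(Taken together, the sed edits above should leave the touched portion of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the sketch below; this is reconstructed from the commands shown, not captured from the guest, and line ordering in the real file may differ.)
	
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]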
	I0429 13:23:43.903950  888828 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:23:43.904050  888828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:23:43.909278  888828 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 13:23:43.909317  888828 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 13:23:43.909327  888828 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0429 13:23:43.909335  888828 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:23:43.909340  888828 command_runner.go:130] > Access: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909347  888828 command_runner.go:130] > Modify: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909352  888828 command_runner.go:130] > Change: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909356  888828 command_runner.go:130] >  Birth: -
	I0429 13:23:43.909404  888828 start.go:562] Will wait 60s for crictl version
	I0429 13:23:43.909460  888828 ssh_runner.go:195] Run: which crictl
	I0429 13:23:43.919768  888828 command_runner.go:130] > /usr/bin/crictl
	I0429 13:23:43.919903  888828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:23:43.966574  888828 command_runner.go:130] > Version:  0.1.0
	I0429 13:23:43.966609  888828 command_runner.go:130] > RuntimeName:  cri-o
	I0429 13:23:43.966614  888828 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 13:23:43.966620  888828 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 13:23:43.966646  888828 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:23:43.966726  888828 ssh_runner.go:195] Run: crio --version
	I0429 13:23:44.001479  888828 command_runner.go:130] > crio version 1.29.1
	I0429 13:23:44.001513  888828 command_runner.go:130] > Version:        1.29.1
	I0429 13:23:44.001522  888828 command_runner.go:130] > GitCommit:      unknown
	I0429 13:23:44.001530  888828 command_runner.go:130] > GitCommitDate:  unknown
	I0429 13:23:44.001536  888828 command_runner.go:130] > GitTreeState:   clean
	I0429 13:23:44.001544  888828 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 13:23:44.001551  888828 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 13:23:44.001558  888828 command_runner.go:130] > Compiler:       gc
	I0429 13:23:44.001565  888828 command_runner.go:130] > Platform:       linux/amd64
	I0429 13:23:44.001570  888828 command_runner.go:130] > Linkmode:       dynamic
	I0429 13:23:44.001576  888828 command_runner.go:130] > BuildTags:      
	I0429 13:23:44.001589  888828 command_runner.go:130] >   containers_image_ostree_stub
	I0429 13:23:44.001603  888828 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 13:23:44.001613  888828 command_runner.go:130] >   btrfs_noversion
	I0429 13:23:44.001621  888828 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 13:23:44.001629  888828 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 13:23:44.001637  888828 command_runner.go:130] >   seccomp
	I0429 13:23:44.001645  888828 command_runner.go:130] > LDFlags:          unknown
	I0429 13:23:44.001656  888828 command_runner.go:130] > SeccompEnabled:   true
	I0429 13:23:44.001664  888828 command_runner.go:130] > AppArmorEnabled:  false
	I0429 13:23:44.003129  888828 ssh_runner.go:195] Run: crio --version
	I0429 13:23:44.036535  888828 command_runner.go:130] > crio version 1.29.1
	I0429 13:23:44.036561  888828 command_runner.go:130] > Version:        1.29.1
	I0429 13:23:44.036566  888828 command_runner.go:130] > GitCommit:      unknown
	I0429 13:23:44.036571  888828 command_runner.go:130] > GitCommitDate:  unknown
	I0429 13:23:44.036581  888828 command_runner.go:130] > GitTreeState:   clean
	I0429 13:23:44.036587  888828 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 13:23:44.036591  888828 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 13:23:44.036595  888828 command_runner.go:130] > Compiler:       gc
	I0429 13:23:44.036599  888828 command_runner.go:130] > Platform:       linux/amd64
	I0429 13:23:44.036604  888828 command_runner.go:130] > Linkmode:       dynamic
	I0429 13:23:44.036609  888828 command_runner.go:130] > BuildTags:      
	I0429 13:23:44.036613  888828 command_runner.go:130] >   containers_image_ostree_stub
	I0429 13:23:44.036617  888828 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 13:23:44.036621  888828 command_runner.go:130] >   btrfs_noversion
	I0429 13:23:44.036625  888828 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 13:23:44.036632  888828 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 13:23:44.036635  888828 command_runner.go:130] >   seccomp
	I0429 13:23:44.036639  888828 command_runner.go:130] > LDFlags:          unknown
	I0429 13:23:44.036643  888828 command_runner.go:130] > SeccompEnabled:   true
	I0429 13:23:44.036647  888828 command_runner.go:130] > AppArmorEnabled:  false
	I0429 13:23:44.038966  888828 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:23:44.040428  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:23:44.043662  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:44.044090  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:44.044130  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:44.044360  888828 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 13:23:44.049246  888828 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 13:23:44.049421  888828 kubeadm.go:877] updating cluster {Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:23:44.049578  888828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:23:44.049654  888828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:23:44.098284  888828 command_runner.go:130] > {
	I0429 13:23:44.098311  888828 command_runner.go:130] >   "images": [
	I0429 13:23:44.098315  888828 command_runner.go:130] >     {
	I0429 13:23:44.098323  888828 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 13:23:44.098329  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098334  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 13:23:44.098339  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098343  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098367  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 13:23:44.098378  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 13:23:44.098386  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098392  888828 command_runner.go:130] >       "size": "65291810",
	I0429 13:23:44.098398  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098408  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098420  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098430  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098434  888828 command_runner.go:130] >     },
	I0429 13:23:44.098438  888828 command_runner.go:130] >     {
	I0429 13:23:44.098446  888828 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 13:23:44.098450  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098458  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 13:23:44.098462  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098467  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098475  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 13:23:44.098482  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 13:23:44.098491  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098497  888828 command_runner.go:130] >       "size": "1363676",
	I0429 13:23:44.098507  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098520  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098530  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098536  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098543  888828 command_runner.go:130] >     },
	I0429 13:23:44.098546  888828 command_runner.go:130] >     {
	I0429 13:23:44.098553  888828 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 13:23:44.098559  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098564  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 13:23:44.098570  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098574  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098581  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 13:23:44.098598  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 13:23:44.098611  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098622  888828 command_runner.go:130] >       "size": "31470524",
	I0429 13:23:44.098631  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098641  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098655  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098662  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098665  888828 command_runner.go:130] >     },
	I0429 13:23:44.098669  888828 command_runner.go:130] >     {
	I0429 13:23:44.098677  888828 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 13:23:44.098684  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098691  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 13:23:44.098701  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098710  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098725  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 13:23:44.098750  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 13:23:44.098758  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098766  888828 command_runner.go:130] >       "size": "61245718",
	I0429 13:23:44.098770  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098777  888828 command_runner.go:130] >       "username": "nonroot",
	I0429 13:23:44.098782  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098791  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098800  888828 command_runner.go:130] >     },
	I0429 13:23:44.098807  888828 command_runner.go:130] >     {
	I0429 13:23:44.098821  888828 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 13:23:44.098831  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098842  888828 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 13:23:44.098851  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098860  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098873  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 13:23:44.098884  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 13:23:44.098893  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098902  888828 command_runner.go:130] >       "size": "150779692",
	I0429 13:23:44.098910  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.098921  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.098930  888828 command_runner.go:130] >       },
	I0429 13:23:44.098939  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098948  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098962  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098970  888828 command_runner.go:130] >     },
	I0429 13:23:44.098977  888828 command_runner.go:130] >     {
	I0429 13:23:44.098985  888828 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 13:23:44.098994  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099006  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 13:23:44.099015  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099022  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099038  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 13:23:44.099053  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 13:23:44.099061  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099069  888828 command_runner.go:130] >       "size": "117609952",
	I0429 13:23:44.099073  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099082  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099091  888828 command_runner.go:130] >       },
	I0429 13:23:44.099099  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099108  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099118  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099126  888828 command_runner.go:130] >     },
	I0429 13:23:44.099133  888828 command_runner.go:130] >     {
	I0429 13:23:44.099143  888828 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 13:23:44.099153  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099162  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 13:23:44.099167  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099175  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099189  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 13:23:44.099205  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 13:23:44.099212  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099220  888828 command_runner.go:130] >       "size": "112170310",
	I0429 13:23:44.099229  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099235  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099243  888828 command_runner.go:130] >       },
	I0429 13:23:44.099250  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099260  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099267  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099274  888828 command_runner.go:130] >     },
	I0429 13:23:44.099277  888828 command_runner.go:130] >     {
	I0429 13:23:44.099290  888828 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 13:23:44.099299  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099310  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 13:23:44.099319  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099330  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099376  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 13:23:44.099393  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 13:23:44.099403  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099411  888828 command_runner.go:130] >       "size": "85932953",
	I0429 13:23:44.099420  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.099429  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099441  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099448  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099454  888828 command_runner.go:130] >     },
	I0429 13:23:44.099458  888828 command_runner.go:130] >     {
	I0429 13:23:44.099464  888828 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 13:23:44.099470  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099479  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 13:23:44.099485  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099491  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099503  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 13:23:44.099514  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 13:23:44.099519  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099526  888828 command_runner.go:130] >       "size": "63026502",
	I0429 13:23:44.099531  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099537  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099542  888828 command_runner.go:130] >       },
	I0429 13:23:44.099546  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099550  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099555  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099561  888828 command_runner.go:130] >     },
	I0429 13:23:44.099566  888828 command_runner.go:130] >     {
	I0429 13:23:44.099576  888828 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 13:23:44.099583  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099590  888828 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 13:23:44.099596  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099602  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099617  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 13:23:44.099635  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 13:23:44.099643  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099654  888828 command_runner.go:130] >       "size": "750414",
	I0429 13:23:44.099663  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099670  888828 command_runner.go:130] >         "value": "65535"
	I0429 13:23:44.099679  888828 command_runner.go:130] >       },
	I0429 13:23:44.099685  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099696  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099706  888828 command_runner.go:130] >       "pinned": true
	I0429 13:23:44.099715  888828 command_runner.go:130] >     }
	I0429 13:23:44.099723  888828 command_runner.go:130] >   ]
	I0429 13:23:44.099733  888828 command_runner.go:130] > }
	I0429 13:23:44.099986  888828 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:23:44.100002  888828 crio.go:433] Images already preloaded, skipping extraction
	I0429 13:23:44.100076  888828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:23:44.138064  888828 command_runner.go:130] > {
	I0429 13:23:44.138096  888828 command_runner.go:130] >   "images": [
	I0429 13:23:44.138101  888828 command_runner.go:130] >     {
	I0429 13:23:44.138135  888828 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 13:23:44.138142  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138150  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 13:23:44.138154  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138160  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138172  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 13:23:44.138187  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 13:23:44.138194  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138202  888828 command_runner.go:130] >       "size": "65291810",
	I0429 13:23:44.138212  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138220  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138234  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138242  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138249  888828 command_runner.go:130] >     },
	I0429 13:23:44.138255  888828 command_runner.go:130] >     {
	I0429 13:23:44.138273  888828 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 13:23:44.138283  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138292  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 13:23:44.138301  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138309  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138326  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 13:23:44.138342  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 13:23:44.138350  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138358  888828 command_runner.go:130] >       "size": "1363676",
	I0429 13:23:44.138368  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138389  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138399  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138408  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138416  888828 command_runner.go:130] >     },
	I0429 13:23:44.138423  888828 command_runner.go:130] >     {
	I0429 13:23:44.138435  888828 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 13:23:44.138444  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138455  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 13:23:44.138468  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138479  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138493  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 13:23:44.138510  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 13:23:44.138519  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138527  888828 command_runner.go:130] >       "size": "31470524",
	I0429 13:23:44.138536  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138544  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138553  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138562  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138571  888828 command_runner.go:130] >     },
	I0429 13:23:44.138577  888828 command_runner.go:130] >     {
	I0429 13:23:44.138591  888828 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 13:23:44.138604  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138615  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 13:23:44.138624  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138632  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138649  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 13:23:44.138701  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 13:23:44.138716  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138722  888828 command_runner.go:130] >       "size": "61245718",
	I0429 13:23:44.138729  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138736  888828 command_runner.go:130] >       "username": "nonroot",
	I0429 13:23:44.138746  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138754  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138763  888828 command_runner.go:130] >     },
	I0429 13:23:44.138770  888828 command_runner.go:130] >     {
	I0429 13:23:44.138785  888828 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 13:23:44.138795  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138806  888828 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 13:23:44.138814  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138821  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138837  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 13:23:44.138851  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 13:23:44.138860  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138868  888828 command_runner.go:130] >       "size": "150779692",
	I0429 13:23:44.138878  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.138886  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.138895  888828 command_runner.go:130] >       },
	I0429 13:23:44.138902  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138911  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138918  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138927  888828 command_runner.go:130] >     },
	I0429 13:23:44.138934  888828 command_runner.go:130] >     {
	I0429 13:23:44.138948  888828 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 13:23:44.138960  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138972  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 13:23:44.138982  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138991  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139018  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 13:23:44.139034  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 13:23:44.139043  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139050  888828 command_runner.go:130] >       "size": "117609952",
	I0429 13:23:44.139059  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139067  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139074  888828 command_runner.go:130] >       },
	I0429 13:23:44.139082  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139092  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139102  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139108  888828 command_runner.go:130] >     },
	I0429 13:23:44.139117  888828 command_runner.go:130] >     {
	I0429 13:23:44.139129  888828 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 13:23:44.139138  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139148  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 13:23:44.139156  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139164  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139180  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 13:23:44.139196  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 13:23:44.139205  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139213  888828 command_runner.go:130] >       "size": "112170310",
	I0429 13:23:44.139223  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139230  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139239  888828 command_runner.go:130] >       },
	I0429 13:23:44.139246  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139256  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139263  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139270  888828 command_runner.go:130] >     },
	I0429 13:23:44.139278  888828 command_runner.go:130] >     {
	I0429 13:23:44.139289  888828 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 13:23:44.139299  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139310  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 13:23:44.139321  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139331  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139354  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 13:23:44.139384  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 13:23:44.139393  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139403  888828 command_runner.go:130] >       "size": "85932953",
	I0429 13:23:44.139411  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.139419  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139429  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139438  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139446  888828 command_runner.go:130] >     },
	I0429 13:23:44.139452  888828 command_runner.go:130] >     {
	I0429 13:23:44.139464  888828 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 13:23:44.139472  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139481  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 13:23:44.139490  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139498  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139514  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 13:23:44.139530  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 13:23:44.139539  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139547  888828 command_runner.go:130] >       "size": "63026502",
	I0429 13:23:44.139555  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139563  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139572  888828 command_runner.go:130] >       },
	I0429 13:23:44.139580  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139589  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139596  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139604  888828 command_runner.go:130] >     },
	I0429 13:23:44.139610  888828 command_runner.go:130] >     {
	I0429 13:23:44.139624  888828 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 13:23:44.139633  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139641  888828 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 13:23:44.139651  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139658  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139672  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 13:23:44.139688  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 13:23:44.139704  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139715  888828 command_runner.go:130] >       "size": "750414",
	I0429 13:23:44.139723  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139733  888828 command_runner.go:130] >         "value": "65535"
	I0429 13:23:44.139742  888828 command_runner.go:130] >       },
	I0429 13:23:44.139751  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139762  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139771  888828 command_runner.go:130] >       "pinned": true
	I0429 13:23:44.139778  888828 command_runner.go:130] >     }
	I0429 13:23:44.139787  888828 command_runner.go:130] >   ]
	I0429 13:23:44.139793  888828 command_runner.go:130] > }
	I0429 13:23:44.139940  888828 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:23:44.139956  888828 cache_images.go:84] Images are preloaded, skipping loading
	I0429 13:23:44.139989  888828 kubeadm.go:928] updating node { 192.168.39.179 8443 v1.30.0 crio true true} ...
	I0429 13:23:44.140150  888828 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-404116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:23:44.140252  888828 ssh_runner.go:195] Run: crio config
	I0429 13:23:44.181854  888828 command_runner.go:130] ! time="2024-04-29 13:23:44.158094108Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 13:23:44.188904  888828 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 13:23:44.203601  888828 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 13:23:44.203629  888828 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 13:23:44.203635  888828 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 13:23:44.203639  888828 command_runner.go:130] > #
	I0429 13:23:44.203645  888828 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 13:23:44.203651  888828 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 13:23:44.203657  888828 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 13:23:44.203664  888828 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 13:23:44.203667  888828 command_runner.go:130] > # reload'.
	I0429 13:23:44.203673  888828 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 13:23:44.203679  888828 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 13:23:44.203685  888828 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 13:23:44.203693  888828 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 13:23:44.203698  888828 command_runner.go:130] > [crio]
	I0429 13:23:44.203703  888828 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 13:23:44.203710  888828 command_runner.go:130] > # containers images, in this directory.
	I0429 13:23:44.203715  888828 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 13:23:44.203725  888828 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 13:23:44.203736  888828 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 13:23:44.203746  888828 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 13:23:44.203750  888828 command_runner.go:130] > # imagestore = ""
	I0429 13:23:44.203757  888828 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 13:23:44.203766  888828 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 13:23:44.203770  888828 command_runner.go:130] > storage_driver = "overlay"
	I0429 13:23:44.203778  888828 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 13:23:44.203787  888828 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 13:23:44.203792  888828 command_runner.go:130] > storage_option = [
	I0429 13:23:44.203796  888828 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 13:23:44.203802  888828 command_runner.go:130] > ]
	I0429 13:23:44.203809  888828 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 13:23:44.203816  888828 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 13:23:44.203823  888828 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 13:23:44.203829  888828 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 13:23:44.203837  888828 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 13:23:44.203844  888828 command_runner.go:130] > # always happen on a node reboot
	I0429 13:23:44.203849  888828 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 13:23:44.203866  888828 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 13:23:44.203874  888828 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 13:23:44.203880  888828 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 13:23:44.203887  888828 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 13:23:44.203895  888828 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 13:23:44.203905  888828 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 13:23:44.203911  888828 command_runner.go:130] > # internal_wipe = true
	I0429 13:23:44.203919  888828 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 13:23:44.203926  888828 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 13:23:44.203930  888828 command_runner.go:130] > # internal_repair = false
	I0429 13:23:44.203938  888828 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 13:23:44.203955  888828 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 13:23:44.203963  888828 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 13:23:44.203970  888828 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 13:23:44.203976  888828 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 13:23:44.203982  888828 command_runner.go:130] > [crio.api]
	I0429 13:23:44.203994  888828 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 13:23:44.204001  888828 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 13:23:44.204011  888828 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 13:23:44.204018  888828 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 13:23:44.204025  888828 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 13:23:44.204033  888828 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 13:23:44.204040  888828 command_runner.go:130] > # stream_port = "0"
	I0429 13:23:44.204045  888828 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 13:23:44.204052  888828 command_runner.go:130] > # stream_enable_tls = false
	I0429 13:23:44.204058  888828 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 13:23:44.204064  888828 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 13:23:44.204071  888828 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 13:23:44.204079  888828 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 13:23:44.204085  888828 command_runner.go:130] > # minutes.
	I0429 13:23:44.204089  888828 command_runner.go:130] > # stream_tls_cert = ""
	I0429 13:23:44.204097  888828 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 13:23:44.204105  888828 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 13:23:44.204111  888828 command_runner.go:130] > # stream_tls_key = ""
	I0429 13:23:44.204117  888828 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 13:23:44.204125  888828 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 13:23:44.204148  888828 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 13:23:44.204154  888828 command_runner.go:130] > # stream_tls_ca = ""
	I0429 13:23:44.204162  888828 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 13:23:44.204169  888828 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 13:23:44.204176  888828 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 13:23:44.204183  888828 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 13:23:44.204188  888828 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 13:23:44.204196  888828 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 13:23:44.204202  888828 command_runner.go:130] > [crio.runtime]
	I0429 13:23:44.204208  888828 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 13:23:44.204216  888828 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 13:23:44.204221  888828 command_runner.go:130] > # "nofile=1024:2048"
	I0429 13:23:44.204227  888828 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 13:23:44.204233  888828 command_runner.go:130] > # default_ulimits = [
	I0429 13:23:44.204236  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204244  888828 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 13:23:44.204248  888828 command_runner.go:130] > # no_pivot = false
	I0429 13:23:44.204256  888828 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 13:23:44.204268  888828 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 13:23:44.204275  888828 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 13:23:44.204280  888828 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 13:23:44.204287  888828 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 13:23:44.204294  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 13:23:44.204300  888828 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 13:23:44.204304  888828 command_runner.go:130] > # Cgroup setting for conmon
	I0429 13:23:44.204313  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 13:23:44.204319  888828 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 13:23:44.204325  888828 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 13:23:44.204332  888828 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 13:23:44.204339  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 13:23:44.204345  888828 command_runner.go:130] > conmon_env = [
	I0429 13:23:44.204351  888828 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 13:23:44.204356  888828 command_runner.go:130] > ]
	I0429 13:23:44.204361  888828 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 13:23:44.204368  888828 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 13:23:44.204374  888828 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 13:23:44.204380  888828 command_runner.go:130] > # default_env = [
	I0429 13:23:44.204383  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204391  888828 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 13:23:44.204400  888828 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0429 13:23:44.204406  888828 command_runner.go:130] > # selinux = false
	I0429 13:23:44.204412  888828 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 13:23:44.204421  888828 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 13:23:44.204427  888828 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 13:23:44.204433  888828 command_runner.go:130] > # seccomp_profile = ""
	I0429 13:23:44.204438  888828 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 13:23:44.204446  888828 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 13:23:44.204455  888828 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 13:23:44.204461  888828 command_runner.go:130] > # which might increase security.
	I0429 13:23:44.204466  888828 command_runner.go:130] > # This option is currently deprecated,
	I0429 13:23:44.204474  888828 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 13:23:44.204481  888828 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 13:23:44.204487  888828 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 13:23:44.204495  888828 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 13:23:44.204507  888828 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 13:23:44.204516  888828 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 13:23:44.204522  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.204528  888828 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 13:23:44.204534  888828 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 13:23:44.204540  888828 command_runner.go:130] > # the cgroup blockio controller.
	I0429 13:23:44.204545  888828 command_runner.go:130] > # blockio_config_file = ""
	I0429 13:23:44.204553  888828 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 13:23:44.204559  888828 command_runner.go:130] > # blockio parameters.
	I0429 13:23:44.204563  888828 command_runner.go:130] > # blockio_reload = false
	I0429 13:23:44.204572  888828 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 13:23:44.204578  888828 command_runner.go:130] > # irqbalance daemon.
	I0429 13:23:44.204583  888828 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 13:23:44.204591  888828 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 13:23:44.204600  888828 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 13:23:44.204610  888828 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 13:23:44.204617  888828 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 13:23:44.204626  888828 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 13:23:44.204633  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.204637  888828 command_runner.go:130] > # rdt_config_file = ""
	I0429 13:23:44.204645  888828 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 13:23:44.204650  888828 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 13:23:44.204681  888828 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 13:23:44.204688  888828 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 13:23:44.204694  888828 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 13:23:44.204702  888828 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 13:23:44.204706  888828 command_runner.go:130] > # will be added.
	I0429 13:23:44.204711  888828 command_runner.go:130] > # default_capabilities = [
	I0429 13:23:44.204717  888828 command_runner.go:130] > # 	"CHOWN",
	I0429 13:23:44.204721  888828 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 13:23:44.204727  888828 command_runner.go:130] > # 	"FSETID",
	I0429 13:23:44.204730  888828 command_runner.go:130] > # 	"FOWNER",
	I0429 13:23:44.204736  888828 command_runner.go:130] > # 	"SETGID",
	I0429 13:23:44.204740  888828 command_runner.go:130] > # 	"SETUID",
	I0429 13:23:44.204746  888828 command_runner.go:130] > # 	"SETPCAP",
	I0429 13:23:44.204750  888828 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 13:23:44.204761  888828 command_runner.go:130] > # 	"KILL",
	I0429 13:23:44.204767  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204775  888828 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 13:23:44.204783  888828 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 13:23:44.204788  888828 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 13:23:44.204796  888828 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 13:23:44.204804  888828 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 13:23:44.204811  888828 command_runner.go:130] > default_sysctls = [
	I0429 13:23:44.204816  888828 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 13:23:44.204821  888828 command_runner.go:130] > ]
	I0429 13:23:44.204825  888828 command_runner.go:130] > # List of devices on the host that a
	I0429 13:23:44.204833  888828 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 13:23:44.204840  888828 command_runner.go:130] > # allowed_devices = [
	I0429 13:23:44.204843  888828 command_runner.go:130] > # 	"/dev/fuse",
	I0429 13:23:44.204848  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204854  888828 command_runner.go:130] > # List of additional devices, specified as
	I0429 13:23:44.204863  888828 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 13:23:44.204871  888828 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 13:23:44.204876  888828 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 13:23:44.204882  888828 command_runner.go:130] > # additional_devices = [
	I0429 13:23:44.204886  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204893  888828 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 13:23:44.204897  888828 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 13:23:44.204903  888828 command_runner.go:130] > # 	"/etc/cdi",
	I0429 13:23:44.204906  888828 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 13:23:44.204910  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204916  888828 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 13:23:44.204924  888828 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 13:23:44.204931  888828 command_runner.go:130] > # Defaults to false.
	I0429 13:23:44.204936  888828 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 13:23:44.204948  888828 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 13:23:44.204956  888828 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 13:23:44.204962  888828 command_runner.go:130] > # hooks_dir = [
	I0429 13:23:44.204966  888828 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 13:23:44.204970  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204976  888828 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 13:23:44.204990  888828 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 13:23:44.204997  888828 command_runner.go:130] > # its default mounts from the following two files:
	I0429 13:23:44.205003  888828 command_runner.go:130] > #
	I0429 13:23:44.205009  888828 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 13:23:44.205017  888828 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 13:23:44.205023  888828 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 13:23:44.205026  888828 command_runner.go:130] > #
	I0429 13:23:44.205031  888828 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 13:23:44.205039  888828 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 13:23:44.205047  888828 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 13:23:44.205061  888828 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 13:23:44.205068  888828 command_runner.go:130] > #
	I0429 13:23:44.205074  888828 command_runner.go:130] > # default_mounts_file = ""
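A minimal sketch of the option just described, assuming a hypothetical mounts file at /etc/containers/mounts.conf whose entries use the /SRC:/DST format noted in the comments above (illustrative, not part of the logged config):

	default_mounts_file = "/etc/containers/mounts.conf"   # hypothetical path; one /SRC:/DST mount per line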
	I0429 13:23:44.205084  888828 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 13:23:44.205097  888828 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 13:23:44.205106  888828 command_runner.go:130] > pids_limit = 1024
	I0429 13:23:44.205117  888828 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0429 13:23:44.205129  888828 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 13:23:44.205142  888828 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 13:23:44.205158  888828 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 13:23:44.205167  888828 command_runner.go:130] > # log_size_max = -1
	I0429 13:23:44.205181  888828 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 13:23:44.205200  888828 command_runner.go:130] > # log_to_journald = false
	I0429 13:23:44.205214  888828 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 13:23:44.205225  888828 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 13:23:44.205236  888828 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 13:23:44.205248  888828 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 13:23:44.205259  888828 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 13:23:44.205269  888828 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 13:23:44.205281  888828 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 13:23:44.205290  888828 command_runner.go:130] > # read_only = false
	I0429 13:23:44.205303  888828 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 13:23:44.205315  888828 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 13:23:44.205326  888828 command_runner.go:130] > # live configuration reload.
	I0429 13:23:44.205334  888828 command_runner.go:130] > # log_level = "info"
	I0429 13:23:44.205343  888828 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 13:23:44.205359  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.205368  888828 command_runner.go:130] > # log_filter = ""
	I0429 13:23:44.205379  888828 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 13:23:44.205395  888828 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 13:23:44.205404  888828 command_runner.go:130] > # separated by comma.
	I0429 13:23:44.205418  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205427  888828 command_runner.go:130] > # uid_mappings = ""
	I0429 13:23:44.205438  888828 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 13:23:44.205450  888828 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 13:23:44.205460  888828 command_runner.go:130] > # separated by comma.
	I0429 13:23:44.205475  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205484  888828 command_runner.go:130] > # gid_mappings = ""
	I0429 13:23:44.205496  888828 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 13:23:44.205510  888828 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 13:23:44.205522  888828 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 13:23:44.205537  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205547  888828 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 13:23:44.205559  888828 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 13:23:44.205572  888828 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 13:23:44.205585  888828 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 13:23:44.205599  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205609  888828 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 13:23:44.205619  888828 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 13:23:44.205631  888828 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 13:23:44.205643  888828 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 13:23:44.205653  888828 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 13:23:44.205669  888828 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 13:23:44.205682  888828 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 13:23:44.205693  888828 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 13:23:44.205704  888828 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 13:23:44.205710  888828 command_runner.go:130] > drop_infra_ctr = false
	I0429 13:23:44.205722  888828 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 13:23:44.205734  888828 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 13:23:44.205749  888828 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 13:23:44.205758  888828 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 13:23:44.205773  888828 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 13:23:44.205791  888828 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 13:23:44.205803  888828 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 13:23:44.205814  888828 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 13:23:44.205823  888828 command_runner.go:130] > # shared_cpuset = ""
	I0429 13:23:44.205836  888828 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 13:23:44.205847  888828 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 13:23:44.205856  888828 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 13:23:44.205870  888828 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 13:23:44.205884  888828 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 13:23:44.205896  888828 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 13:23:44.205909  888828 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 13:23:44.205919  888828 command_runner.go:130] > # enable_criu_support = false
	I0429 13:23:44.205930  888828 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 13:23:44.205947  888828 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 13:23:44.205956  888828 command_runner.go:130] > # enable_pod_events = false
	I0429 13:23:44.205970  888828 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 13:23:44.205995  888828 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 13:23:44.206004  888828 command_runner.go:130] > # default_runtime = "runc"
	I0429 13:23:44.206014  888828 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 13:23:44.206027  888828 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0429 13:23:44.206042  888828 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 13:23:44.206050  888828 command_runner.go:130] > # creation as a file is not desired either.
	I0429 13:23:44.206060  888828 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 13:23:44.206068  888828 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 13:23:44.206072  888828 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 13:23:44.206078  888828 command_runner.go:130] > # ]
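As a sketch of the option above, rejecting /etc/hostname (the example the comments themselves give) when it is absent on the host would look like the following; this is illustrative and not part of this cluster's config:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]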
	I0429 13:23:44.206083  888828 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 13:23:44.206092  888828 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 13:23:44.206100  888828 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 13:23:44.206105  888828 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 13:23:44.206110  888828 command_runner.go:130] > #
	I0429 13:23:44.206115  888828 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 13:23:44.206122  888828 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 13:23:44.206175  888828 command_runner.go:130] > # runtime_type = "oci"
	I0429 13:23:44.206184  888828 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 13:23:44.206194  888828 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 13:23:44.206198  888828 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 13:23:44.206205  888828 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 13:23:44.206209  888828 command_runner.go:130] > # monitor_env = []
	I0429 13:23:44.206216  888828 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 13:23:44.206220  888828 command_runner.go:130] > # allowed_annotations = []
	I0429 13:23:44.206228  888828 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 13:23:44.206234  888828 command_runner.go:130] > # Where:
	I0429 13:23:44.206240  888828 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 13:23:44.206248  888828 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 13:23:44.206256  888828 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 13:23:44.206265  888828 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 13:23:44.206271  888828 command_runner.go:130] > #   in $PATH.
	I0429 13:23:44.206277  888828 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 13:23:44.206284  888828 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 13:23:44.206290  888828 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 13:23:44.206296  888828 command_runner.go:130] > #   state.
	I0429 13:23:44.206303  888828 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 13:23:44.206311  888828 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0429 13:23:44.206319  888828 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 13:23:44.206327  888828 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 13:23:44.206334  888828 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 13:23:44.206343  888828 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 13:23:44.206349  888828 command_runner.go:130] > #   The currently recognized values are:
	I0429 13:23:44.206355  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 13:23:44.206365  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 13:23:44.206373  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 13:23:44.206379  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 13:23:44.206389  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 13:23:44.206397  888828 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 13:23:44.206406  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 13:23:44.206412  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 13:23:44.206420  888828 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 13:23:44.206429  888828 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 13:23:44.206434  888828 command_runner.go:130] > #   deprecated option "conmon".
	I0429 13:23:44.206442  888828 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 13:23:44.206455  888828 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 13:23:44.206464  888828 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 13:23:44.206469  888828 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 13:23:44.206477  888828 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0429 13:23:44.206485  888828 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 13:23:44.206491  888828 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 13:23:44.206498  888828 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 13:23:44.206501  888828 command_runner.go:130] > #
	I0429 13:23:44.206506  888828 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 13:23:44.206511  888828 command_runner.go:130] > #
	I0429 13:23:44.206517  888828 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 13:23:44.206526  888828 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 13:23:44.206531  888828 command_runner.go:130] > #
	I0429 13:23:44.206537  888828 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 13:23:44.206545  888828 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 13:23:44.206549  888828 command_runner.go:130] > #
	I0429 13:23:44.206555  888828 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 13:23:44.206560  888828 command_runner.go:130] > # feature.
	I0429 13:23:44.206563  888828 command_runner.go:130] > #
	I0429 13:23:44.206571  888828 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 13:23:44.206577  888828 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 13:23:44.206585  888828 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 13:23:44.206593  888828 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 13:23:44.206599  888828 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 13:23:44.206605  888828 command_runner.go:130] > #
	I0429 13:23:44.206611  888828 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 13:23:44.206619  888828 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 13:23:44.206623  888828 command_runner.go:130] > #
	I0429 13:23:44.206629  888828 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 13:23:44.206637  888828 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 13:23:44.206640  888828 command_runner.go:130] > #
	I0429 13:23:44.206645  888828 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 13:23:44.206653  888828 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 13:23:44.206657  888828 command_runner.go:130] > # limitation.
	I0429 13:23:44.206663  888828 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 13:23:44.206670  888828 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 13:23:44.206679  888828 command_runner.go:130] > runtime_type = "oci"
	I0429 13:23:44.206685  888828 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 13:23:44.206690  888828 command_runner.go:130] > runtime_config_path = ""
	I0429 13:23:44.206697  888828 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 13:23:44.206701  888828 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 13:23:44.206707  888828 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 13:23:44.206711  888828 command_runner.go:130] > monitor_env = [
	I0429 13:23:44.206718  888828 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 13:23:44.206724  888828 command_runner.go:130] > ]
	I0429 13:23:44.206729  888828 command_runner.go:130] > privileged_without_host_devices = false
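For comparison with the runc entry above, a minimal sketch of a second runtime handler following the documented table format; the handler name "crun", its paths, and the allowed seccomp-notifier annotation are assumptions for illustration only:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"        # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",   # opts this handler into the seccomp notifier feature described above
	]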
	I0429 13:23:44.206737  888828 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 13:23:44.206744  888828 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 13:23:44.206754  888828 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 13:23:44.206763  888828 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0429 13:23:44.206773  888828 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 13:23:44.206780  888828 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 13:23:44.206792  888828 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 13:23:44.206802  888828 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 13:23:44.206810  888828 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 13:23:44.206818  888828 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 13:23:44.206824  888828 command_runner.go:130] > # Example:
	I0429 13:23:44.206828  888828 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 13:23:44.206833  888828 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 13:23:44.206841  888828 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 13:23:44.206846  888828 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 13:23:44.206851  888828 command_runner.go:130] > # cpuset = "0-1"
	I0429 13:23:44.206855  888828 command_runner.go:130] > # cpushares = "0"
	I0429 13:23:44.206861  888828 command_runner.go:130] > # Where:
	I0429 13:23:44.206866  888828 command_runner.go:130] > # The workload name is workload-type.
	I0429 13:23:44.206875  888828 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 13:23:44.206882  888828 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 13:23:44.206888  888828 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 13:23:44.206898  888828 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 13:23:44.206906  888828 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0429 13:23:44.206914  888828 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 13:23:44.206920  888828 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 13:23:44.206931  888828 command_runner.go:130] > # Default value is set to true
	I0429 13:23:44.206938  888828 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 13:23:44.206950  888828 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 13:23:44.206958  888828 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 13:23:44.206965  888828 command_runner.go:130] > # Default value is set to 'false'
	I0429 13:23:44.206969  888828 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 13:23:44.206977  888828 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 13:23:44.206980  888828 command_runner.go:130] > #
	I0429 13:23:44.206986  888828 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 13:23:44.206991  888828 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 13:23:44.206997  888828 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 13:23:44.207003  888828 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 13:23:44.207009  888828 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 13:23:44.207013  888828 command_runner.go:130] > [crio.image]
	I0429 13:23:44.207018  888828 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 13:23:44.207022  888828 command_runner.go:130] > # default_transport = "docker://"
	I0429 13:23:44.207028  888828 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 13:23:44.207034  888828 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 13:23:44.207038  888828 command_runner.go:130] > # global_auth_file = ""
	I0429 13:23:44.207043  888828 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 13:23:44.207052  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.207057  888828 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 13:23:44.207063  888828 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 13:23:44.207068  888828 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 13:23:44.207073  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.207077  888828 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 13:23:44.207082  888828 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 13:23:44.207088  888828 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0429 13:23:44.207094  888828 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0429 13:23:44.207100  888828 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 13:23:44.207103  888828 command_runner.go:130] > # pause_command = "/pause"
	I0429 13:23:44.207110  888828 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 13:23:44.207115  888828 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 13:23:44.207121  888828 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 13:23:44.207129  888828 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 13:23:44.207135  888828 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 13:23:44.207146  888828 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 13:23:44.207150  888828 command_runner.go:130] > # pinned_images = [
	I0429 13:23:44.207153  888828 command_runner.go:130] > # ]
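A sketch of the three pattern styles the pinned_images comments describe (exact, trailing glob, and keyword matches); the image names are illustrative only:

	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match
		"registry.k8s.io/kube-*",      # glob: wildcard at the end
		"*metrics-server*",            # keyword: wildcards on both ends
	]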
	I0429 13:23:44.207159  888828 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 13:23:44.207165  888828 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 13:23:44.207172  888828 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 13:23:44.207178  888828 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 13:23:44.207183  888828 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 13:23:44.207190  888828 command_runner.go:130] > # signature_policy = ""
	I0429 13:23:44.207196  888828 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 13:23:44.207205  888828 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 13:23:44.207214  888828 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 13:23:44.207222  888828 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0429 13:23:44.207229  888828 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 13:23:44.207236  888828 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 13:23:44.207242  888828 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 13:23:44.207251  888828 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 13:23:44.207257  888828 command_runner.go:130] > # changing them here.
	I0429 13:23:44.207261  888828 command_runner.go:130] > # insecure_registries = [
	I0429 13:23:44.207267  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207273  888828 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 13:23:44.207280  888828 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 13:23:44.207289  888828 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 13:23:44.207297  888828 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 13:23:44.207301  888828 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 13:23:44.207307  888828 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 13:23:44.207313  888828 command_runner.go:130] > # CNI plugins.
	I0429 13:23:44.207317  888828 command_runner.go:130] > [crio.network]
	I0429 13:23:44.207324  888828 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 13:23:44.207332  888828 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 13:23:44.207336  888828 command_runner.go:130] > # cni_default_network = ""
	I0429 13:23:44.207344  888828 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 13:23:44.207350  888828 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 13:23:44.207355  888828 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 13:23:44.207382  888828 command_runner.go:130] > # plugin_dirs = [
	I0429 13:23:44.207391  888828 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 13:23:44.207406  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207416  888828 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 13:23:44.207422  888828 command_runner.go:130] > [crio.metrics]
	I0429 13:23:44.207427  888828 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 13:23:44.207433  888828 command_runner.go:130] > enable_metrics = true
	I0429 13:23:44.207438  888828 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 13:23:44.207445  888828 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 13:23:44.207455  888828 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0429 13:23:44.207463  888828 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 13:23:44.207471  888828 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 13:23:44.207476  888828 command_runner.go:130] > # metrics_collectors = [
	I0429 13:23:44.207482  888828 command_runner.go:130] > # 	"operations",
	I0429 13:23:44.207487  888828 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 13:23:44.207493  888828 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 13:23:44.207498  888828 command_runner.go:130] > # 	"operations_errors",
	I0429 13:23:44.207504  888828 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 13:23:44.207509  888828 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 13:23:44.207515  888828 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 13:23:44.207520  888828 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 13:23:44.207527  888828 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 13:23:44.207531  888828 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 13:23:44.207538  888828 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 13:23:44.207544  888828 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 13:23:44.207551  888828 command_runner.go:130] > # 	"containers_oom_total",
	I0429 13:23:44.207555  888828 command_runner.go:130] > # 	"containers_oom",
	I0429 13:23:44.207562  888828 command_runner.go:130] > # 	"processes_defunct",
	I0429 13:23:44.207566  888828 command_runner.go:130] > # 	"operations_total",
	I0429 13:23:44.207570  888828 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 13:23:44.207577  888828 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 13:23:44.207581  888828 command_runner.go:130] > # 	"operations_errors_total",
	I0429 13:23:44.207586  888828 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 13:23:44.207590  888828 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 13:23:44.207597  888828 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 13:23:44.207601  888828 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 13:23:44.207608  888828 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 13:23:44.207613  888828 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 13:23:44.207626  888828 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 13:23:44.207632  888828 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 13:23:44.207636  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207640  888828 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 13:23:44.207647  888828 command_runner.go:130] > # metrics_port = 9090
	I0429 13:23:44.207652  888828 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 13:23:44.207658  888828 command_runner.go:130] > # metrics_socket = ""
	I0429 13:23:44.207663  888828 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 13:23:44.207671  888828 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 13:23:44.207680  888828 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 13:23:44.207687  888828 command_runner.go:130] > # certificate on any modification event.
	I0429 13:23:44.207691  888828 command_runner.go:130] > # metrics_cert = ""
	I0429 13:23:44.207698  888828 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 13:23:44.207703  888828 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 13:23:44.207710  888828 command_runner.go:130] > # metrics_key = ""
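If only a few collectors were wanted rather than the full default set, a sketch of the [crio.metrics] section might look like the following; the collector names are taken from the list above and the port matches the commented default:

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]
	metrics_port = 9090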
	I0429 13:23:44.207715  888828 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 13:23:44.207721  888828 command_runner.go:130] > [crio.tracing]
	I0429 13:23:44.207727  888828 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 13:23:44.207733  888828 command_runner.go:130] > # enable_tracing = false
	I0429 13:23:44.207738  888828 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0429 13:23:44.207746  888828 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 13:23:44.207754  888828 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 13:23:44.207760  888828 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0429 13:23:44.207764  888828 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 13:23:44.207770  888828 command_runner.go:130] > [crio.nri]
	I0429 13:23:44.207774  888828 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 13:23:44.207781  888828 command_runner.go:130] > # enable_nri = false
	I0429 13:23:44.207785  888828 command_runner.go:130] > # NRI socket to listen on.
	I0429 13:23:44.207792  888828 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 13:23:44.207796  888828 command_runner.go:130] > # NRI plugin directory to use.
	I0429 13:23:44.207803  888828 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 13:23:44.207808  888828 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 13:23:44.207815  888828 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 13:23:44.207821  888828 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 13:23:44.207834  888828 command_runner.go:130] > # nri_disable_connections = false
	I0429 13:23:44.207842  888828 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 13:23:44.207852  888828 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 13:23:44.207860  888828 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 13:23:44.207864  888828 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 13:23:44.207872  888828 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 13:23:44.207879  888828 command_runner.go:130] > [crio.stats]
	I0429 13:23:44.207885  888828 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 13:23:44.207892  888828 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 13:23:44.207899  888828 command_runner.go:130] > # stats_collection_period = 0
	I0429 13:23:44.208067  888828 cni.go:84] Creating CNI manager for ""
	I0429 13:23:44.208083  888828 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:23:44.208096  888828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:23:44.208118  888828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-404116 NodeName:multinode-404116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:23:44.208301  888828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-404116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:23:44.208379  888828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:23:44.220927  888828 command_runner.go:130] > kubeadm
	I0429 13:23:44.220955  888828 command_runner.go:130] > kubectl
	I0429 13:23:44.220960  888828 command_runner.go:130] > kubelet
	I0429 13:23:44.220994  888828 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:23:44.221063  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:23:44.233095  888828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 13:23:44.254378  888828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:23:44.274260  888828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0429 13:23:44.295440  888828 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0429 13:23:44.300393  888828 command_runner.go:130] > 192.168.39.179	control-plane.minikube.internal
	I0429 13:23:44.300497  888828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:23:44.454621  888828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:23:44.471210  888828 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116 for IP: 192.168.39.179
	I0429 13:23:44.471239  888828 certs.go:194] generating shared ca certs ...
	I0429 13:23:44.471272  888828 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:23:44.471459  888828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:23:44.471498  888828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:23:44.471507  888828 certs.go:256] generating profile certs ...
	I0429 13:23:44.471581  888828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/client.key
	I0429 13:23:44.471656  888828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key.55dd999f
	I0429 13:23:44.471694  888828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key
	I0429 13:23:44.471705  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 13:23:44.471716  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 13:23:44.471732  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 13:23:44.471747  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 13:23:44.471758  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 13:23:44.471771  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 13:23:44.471782  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 13:23:44.471793  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 13:23:44.471842  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:23:44.471869  888828 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:23:44.471879  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:23:44.471901  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:23:44.471921  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:23:44.471942  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:23:44.471993  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:23:44.472019  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.472031  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.472043  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.472684  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:23:44.503777  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:23:44.530413  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:23:44.558690  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:23:44.586903  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:23:44.616032  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:23:44.645868  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:23:44.674621  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 13:23:44.704591  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:23:44.733658  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:23:44.761642  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:23:44.790133  888828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:23:44.811700  888828 ssh_runner.go:195] Run: openssl version
	I0429 13:23:44.818448  888828 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 13:23:44.818552  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:23:44.831376  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837034  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837079  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837143  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.843756  888828 command_runner.go:130] > 51391683
	I0429 13:23:44.843902  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 13:23:44.855416  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:23:44.868341  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874245  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874291  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874342  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.881295  888828 command_runner.go:130] > 3ec20f2e
	I0429 13:23:44.881396  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:23:44.892567  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:23:44.905077  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910771  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910830  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910905  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.917547  888828 command_runner.go:130] > b5213941
	I0429 13:23:44.917664  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
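	The three runs above share one pattern: copy the PEM into /usr/share/ca-certificates, compute its subject hash with openssl, then symlink /etc/ssl/certs/<subject-hash>.0 to it so OpenSSL-based clients can locate the CA. The Go sketch below is only an illustration of that pattern, not minikube's actual certs.go code; it assumes openssl is on PATH and the process can write to /etc/ssl/certs.

	// Hypothetical sketch: install a CA certificate the way the log above does it —
	// hash the PEM with openssl, then create the /etc/ssl/certs/<hash>.0 symlink.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath string) error {
		// Same command as in the log: openssl x509 -hash -noout -in <pemPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Mirrors: test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> /etc/ssl/certs/<hash>.0
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink already present
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}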
	I0429 13:23:44.928854  888828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:23:44.934443  888828 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:23:44.934476  888828 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 13:23:44.934482  888828 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0429 13:23:44.934492  888828 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:23:44.934502  888828 command_runner.go:130] > Access: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934510  888828 command_runner.go:130] > Modify: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934516  888828 command_runner.go:130] > Change: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934524  888828 command_runner.go:130] >  Birth: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934595  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 13:23:44.941334  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.941495  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 13:23:44.948268  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.948416  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 13:23:44.955108  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.955298  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 13:23:44.962356  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.962513  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 13:23:44.969701  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.969830  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 13:23:44.976571  888828 command_runner.go:130] > Certificate will not expire
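	The "-checkend 86400" probes above ask openssl whether each control-plane certificate is still valid 24 hours from now. Below is a rough Go equivalent of that check, included only as an illustration (the paths are taken from the log; this is not the code minikube runs).

	// Hypothetical sketch: report whether a certificate expires within the given window,
	// equivalent in spirit to `openssl x509 -noout -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Same question -checkend asks: is NotAfter earlier than now+d?
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			exp, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			if exp {
				fmt.Printf("%s: certificate will expire within 24h\n", p)
			} else {
				fmt.Printf("%s: certificate will not expire\n", p)
			}
		}
	}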
	I0429 13:23:44.976703  888828 kubeadm.go:391] StartCluster: {Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:23:44.976877  888828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:23:44.976954  888828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:23:45.018450  888828 command_runner.go:130] > 44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649
	I0429 13:23:45.018482  888828 command_runner.go:130] > ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc
	I0429 13:23:45.018501  888828 command_runner.go:130] > e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1
	I0429 13:23:45.018508  888828 command_runner.go:130] > b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0
	I0429 13:23:45.018513  888828 command_runner.go:130] > a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288
	I0429 13:23:45.018519  888828 command_runner.go:130] > 429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09
	I0429 13:23:45.018524  888828 command_runner.go:130] > 80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd
	I0429 13:23:45.018531  888828 command_runner.go:130] > 972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8
	I0429 13:23:45.020241  888828 cri.go:89] found id: "44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649"
	I0429 13:23:45.020265  888828 cri.go:89] found id: "ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc"
	I0429 13:23:45.020268  888828 cri.go:89] found id: "e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1"
	I0429 13:23:45.020271  888828 cri.go:89] found id: "b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0"
	I0429 13:23:45.020274  888828 cri.go:89] found id: "a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288"
	I0429 13:23:45.020279  888828 cri.go:89] found id: "429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09"
	I0429 13:23:45.020282  888828 cri.go:89] found id: "80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd"
	I0429 13:23:45.020284  888828 cri.go:89] found id: "972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8"
	I0429 13:23:45.020287  888828 cri.go:89] found id: ""
	I0429 13:23:45.020339  888828 ssh_runner.go:195] Run: sudo runc list -f json
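	The container discovery above amounts to running crictl ps -a --quiet with a namespace label filter and treating each output line as a container ID. The short Go sketch below illustrates that one step; it is not minikube's cri.go and assumes crictl and sudo are available on the node.

	// Hypothetical sketch: list kube-system container IDs via crictl, one ID per line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		// Same invocation as in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}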
	
	
	==> CRI-O <==
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.213927274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397111213898012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5831a6b-6b3c-403c-bf3e-380599a5d23e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.214540840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d22e263-201c-42e7-850c-1e428214371a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.214623800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d22e263-201c-42e7-850c-1e428214371a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.215025508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d22e263-201c-42e7-850c-1e428214371a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.267760031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49e41f9d-dd7d-4db8-bfe1-3e7fc68cd753 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.267863777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49e41f9d-dd7d-4db8-bfe1-3e7fc68cd753 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.269107874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f08d4c0a-faa9-49f8-88be-38c418d6dfb5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.269760903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397111269730313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f08d4c0a-faa9-49f8-88be-38c418d6dfb5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.270640434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=703d140a-b5d0-4901-9579-1e61d6f07549 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.270727868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=703d140a-b5d0-4901-9579-1e61d6f07549 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.271242712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=703d140a-b5d0-4901-9579-1e61d6f07549 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.317736190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7f97989-0a11-41f9-9205-a8a5022a87c3 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.317878970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7f97989-0a11-41f9-9205-a8a5022a87c3 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.319309966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7aad9724-d752-4558-8890-239cc5dc57e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.319763810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397111319740390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aad9724-d752-4558-8890-239cc5dc57e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.320424609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87626b34-6afe-4c96-b22d-d45f8744c937 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.320495027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87626b34-6afe-4c96-b22d-d45f8744c937 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.320850588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87626b34-6afe-4c96-b22d-d45f8744c937 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.376497182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=966a5ac8-d5fc-4de2-912d-81b797854972 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.376604333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=966a5ac8-d5fc-4de2-912d-81b797854972 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.378025666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a75d800a-7486-4a7c-bccb-d1d44bb4772b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.378555297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397111378527577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a75d800a-7486-4a7c-bccb-d1d44bb4772b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.379047902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d009f73-3876-42e3-a77c-886ce365f207 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.379106734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d009f73-3876-42e3-a77c-886ce365f207 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:25:11 multinode-404116 crio[2837]: time="2024-04-29 13:25:11.379557253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d009f73-3876-42e3-a77c-886ce365f207 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f958c0312feae       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   29f53cb24b75d       busybox-fc5497c4f-qv47r
	d3d662af5e7ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   8454194fb8485       coredns-7db6d8ff4d-mmfbk
	303bc49134b18       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   65ae91209d13b       kindnet-f8fr7
	c65216a06dd7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   974d2f9aea03e       storage-provisioner
	e7f592c1524d6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   0928b94fea1b5       kube-proxy-rz7lc
	82d8672ffe502       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   c49a3d526293b       kube-scheduler-multinode-404116
	d6c6430f33020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   1310d71b84ed2       etcd-multinode-404116
	9db65e6fb1cd0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   1c1099fbf7e5a       kube-controller-manager-multinode-404116
	9bc639c8a4d93       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   2a043ae3264a5       kube-apiserver-multinode-404116
	73a7bc8db9dd1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   9b2f791d41c72       busybox-fc5497c4f-qv47r
	44ee48270c02d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   76b2c74a9b1b8       coredns-7db6d8ff4d-mmfbk
	ae888ddcc9cdb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   e77c634d7b435       storage-provisioner
	e8798b622aa8f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   1d908aac8eb9b       kube-proxy-rz7lc
	b0e4f3651130b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   0f11f8fd000fc       kindnet-f8fr7
	a1fd6f8fc5902       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   d6b1c1702d9cc       etcd-multinode-404116
	429fa04058735       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   01572ba0feef0       kube-scheduler-multinode-404116
	80662b05a48fd       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   0022be900755a       kube-apiserver-multinode-404116
	972052fbdfae7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   b28561774589c       kube-controller-manager-multinode-404116
	
	
	==> coredns [44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649] <==
	[INFO] 10.244.0.3:44073 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002033086s
	[INFO] 10.244.0.3:39857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101505s
	[INFO] 10.244.0.3:37799 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051577s
	[INFO] 10.244.0.3:47790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001609677s
	[INFO] 10.244.0.3:43843 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061147s
	[INFO] 10.244.0.3:38165 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094063s
	[INFO] 10.244.0.3:59070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041907s
	[INFO] 10.244.1.2:47443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161709s
	[INFO] 10.244.1.2:52611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013738s
	[INFO] 10.244.1.2:41816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078439s
	[INFO] 10.244.1.2:46590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159774s
	[INFO] 10.244.0.3:49983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098713s
	[INFO] 10.244.0.3:50156 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235118s
	[INFO] 10.244.0.3:48129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066664s
	[INFO] 10.244.0.3:60037 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084661s
	[INFO] 10.244.1.2:38798 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172291s
	[INFO] 10.244.1.2:43137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286744s
	[INFO] 10.244.1.2:41187 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141695s
	[INFO] 10.244.1.2:53196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153859s
	[INFO] 10.244.0.3:47734 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113019s
	[INFO] 10.244.0.3:43346 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094125s
	[INFO] 10.244.0.3:41390 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079452s
	[INFO] 10.244.0.3:58818 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46565 - 46945 "HINFO IN 6282464484539575923.7220632988662159516. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035039689s
	
	
	==> describe nodes <==
	Name:               multinode-404116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-404116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=multinode-404116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T13_17_37_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-404116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:25:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    multinode-404116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f8c851a6b7a4aa0bd6b1654a3273021
	  System UUID:                9f8c851a-6b7a-4aa0-bd6b-1654a3273021
	  Boot ID:                    63737963-1177-4fbc-9a7a-2c3628aec3ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qv47r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-7db6d8ff4d-mmfbk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-multinode-404116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-f8fr7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m21s
	  kube-system                 kube-apiserver-multinode-404116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-multinode-404116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-rz7lc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-scheduler-multinode-404116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m19s                  kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m41s)  kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m41s)  kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m41s)  kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s                  kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m35s                  kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s                  kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m22s                  node-controller  Node multinode-404116 event: Registered Node multinode-404116 in Controller
	  Normal  NodeReady                7m18s                  kubelet          Node multinode-404116 status is now: NodeReady
	  Normal  Starting                 85s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)      kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)      kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)      kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node multinode-404116 event: Registered Node multinode-404116 in Controller
	
	
	Name:               multinode-404116-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-404116-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=multinode-404116
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T13_24_31_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:24:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-404116-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:25:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:24:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:24:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:24:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:24:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-404116-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66ebf1df42804f89a7563922d8e28417
	  System UUID:                66ebf1df-4280-4f89-a756-3922d8e28417
	  Boot ID:                    e350f6b9-8643-4840-ac6e-3ca0f96041b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s4jxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-gg2jn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-w7rmz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m39s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m45s (x2 over 6m45s)  kubelet     Node multinode-404116-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s (x2 over 6m45s)  kubelet     Node multinode-404116-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x2 over 6m45s)  kubelet     Node multinode-404116-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m36s                  kubelet     Node multinode-404116-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-404116-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-404116-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-404116-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-404116-m02 status is now: NodeReady
	
	
	Name:               multinode-404116-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-404116-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=multinode-404116
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T13_25_00_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:24:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-404116-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:25:08 +0000   Mon, 29 Apr 2024 13:24:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:25:08 +0000   Mon, 29 Apr 2024 13:24:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:25:08 +0000   Mon, 29 Apr 2024 13:24:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:25:08 +0000   Mon, 29 Apr 2024 13:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-404116-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a01d442c53eb4df09270b1ab07801ce8
	  System UUID:                a01d442c-53eb-4df0-9270-b1ab07801ce8
	  Boot ID:                    2e24a147-e03f-4ab1-8bd5-f41e0a1d7791
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzf28       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-5fn5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m52s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m11s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet     Node multinode-404116-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m47s                  kubelet     Node multinode-404116-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m15s (x2 over 5m16s)  kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m15s (x2 over 5m16s)  kubelet     Node multinode-404116-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m15s (x2 over 5m16s)  kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m7s                   kubelet     Node multinode-404116-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-404116-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-404116-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-404116-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.064195] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075164] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.178778] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.169832] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.317109] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.733022] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.069948] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.539470] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.519477] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.062117] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.083406] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.197283] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.109147] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.166042] kauditd_printk_skb: 80 callbacks suppressed
	[Apr29 13:23] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +0.159669] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.190183] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.151119] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.319070] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.824509] systemd-fstab-generator[2921]: Ignoring "noauto" option for root device
	[  +2.071430] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +5.737680] kauditd_printk_skb: 184 callbacks suppressed
	[Apr29 13:24] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.879817] systemd-fstab-generator[3856]: Ignoring "noauto" option for root device
	[ +21.328173] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288] <==
	{"level":"info","ts":"2024-04-29T13:17:31.770864Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:17:31.77588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T13:17:31.789318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:17:31.789422Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:18:27.026878Z","caller":"traceutil/trace.go:171","msg":"trace[790673457] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"111.157391ms","start":"2024-04-29T13:18:26.915561Z","end":"2024-04-29T13:18:27.026718Z","steps":["trace[790673457] 'process raft request'  (duration: 111.110794ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:18:27.027137Z","caller":"traceutil/trace.go:171","msg":"trace[90457516] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"178.809661ms","start":"2024-04-29T13:18:26.848318Z","end":"2024-04-29T13:18:27.027128Z","steps":["trace[90457516] 'process raft request'  (duration: 172.426634ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:18:27.027308Z","caller":"traceutil/trace.go:171","msg":"trace[1666134087] linearizableReadLoop","detail":"{readStateIndex:504; appliedIndex:503; }","duration":"173.460642ms","start":"2024-04-29T13:18:26.85384Z","end":"2024-04-29T13:18:27.027301Z","steps":["trace[1666134087] 'read index received'  (duration: 166.916513ms)","trace[1666134087] 'applied index is now lower than readState.Index'  (duration: 6.543343ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T13:18:27.0275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.562284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T13:18:27.027573Z","caller":"traceutil/trace.go:171","msg":"trace[547154128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"173.750794ms","start":"2024-04-29T13:18:26.853816Z","end":"2024-04-29T13:18:27.027567Z","steps":["trace[547154128] 'agreement among raft nodes before linearized reading'  (duration: 173.536013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T13:19:13.78466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.452568ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10801196691884090663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-404116-m03.17cac2c398e7b428\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-404116-m03.17cac2c398e7b428\" value_size:640 lease:1577824655029314563 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T13:19:13.785054Z","caller":"traceutil/trace.go:171","msg":"trace[742885260] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"209.551363ms","start":"2024-04-29T13:19:13.575488Z","end":"2024-04-29T13:19:13.78504Z","steps":["trace[742885260] 'process raft request'  (duration: 209.491473ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:19:13.785055Z","caller":"traceutil/trace.go:171","msg":"trace[2102936144] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"241.892328ms","start":"2024-04-29T13:19:13.54308Z","end":"2024-04-29T13:19:13.784972Z","steps":["trace[2102936144] 'process raft request'  (duration: 118.302847ms)","trace[2102936144] 'compare'  (duration: 122.322474ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T13:19:13.785094Z","caller":"traceutil/trace.go:171","msg":"trace[389221968] linearizableReadLoop","detail":"{readStateIndex:645; appliedIndex:644; }","duration":"240.502652ms","start":"2024-04-29T13:19:13.544585Z","end":"2024-04-29T13:19:13.785088Z","steps":["trace[389221968] 'read index received'  (duration: 116.737185ms)","trace[389221968] 'applied index is now lower than readState.Index'  (duration: 123.764318ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T13:19:13.785184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.580546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-404116-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T13:19:13.785531Z","caller":"traceutil/trace.go:171","msg":"trace[697336960] range","detail":"{range_begin:/registry/minions/multinode-404116-m03; range_end:; response_count:1; response_revision:612; }","duration":"240.960604ms","start":"2024-04-29T13:19:13.544555Z","end":"2024-04-29T13:19:13.785516Z","steps":["trace[697336960] 'agreement among raft nodes before linearized reading'  (duration: 240.548002ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:22:11.369101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T13:22:11.369422Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-404116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"]}
	{"level":"warn","ts":"2024-04-29T13:22:11.369596Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.369729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.466969Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.179:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.467127Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.179:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T13:22:11.468619Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9edf382f8ea095e5","current-leader-member-id":"9edf382f8ea095e5"}
	{"level":"info","ts":"2024-04-29T13:22:11.472251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:22:11.472418Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:22:11.472446Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-404116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"]}
	
	
	==> etcd [d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966] <==
	{"level":"info","ts":"2024-04-29T13:23:48.220702Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:23:48.220808Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:23:48.22117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 switched to configuration voters=(11447930554706597349)"}
	{"level":"info","ts":"2024-04-29T13:23:48.22446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b3e38a398ac243f2","local-member-id":"9edf382f8ea095e5","added-peer-id":"9edf382f8ea095e5","added-peer-peer-urls":["https://192.168.39.179:2380"]}
	{"level":"info","ts":"2024-04-29T13:23:48.224762Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b3e38a398ac243f2","local-member-id":"9edf382f8ea095e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:23:48.224859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:23:48.243987Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:23:48.244272Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:23:48.244304Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:23:48.244567Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9edf382f8ea095e5","initial-advertise-peer-urls":["https://192.168.39.179:2380"],"listen-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.179:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:23:48.24461Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:23:49.599373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 received MsgPreVoteResp from 9edf382f8ea095e5 at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 received MsgVoteResp from 9edf382f8ea095e5 at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9edf382f8ea095e5 elected leader 9edf382f8ea095e5 at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.608488Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9edf382f8ea095e5","local-member-attributes":"{Name:multinode-404116 ClientURLs:[https://192.168.39.179:2379]}","request-path":"/0/members/9edf382f8ea095e5/attributes","cluster-id":"b3e38a398ac243f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:23:49.608528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:23:49.608696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:23:49.609174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:23:49.609294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:23:49.611122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.179:2379"}
	{"level":"info","ts":"2024-04-29T13:23:49.611399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:25:11 up 8 min,  0 users,  load average: 0.49, 0.32, 0.16
	Linux multinode-404116 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945] <==
	I0429 13:24:23.471526       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:24:33.484544       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:24:33.484609       1 main.go:227] handling current node
	I0429 13:24:33.484629       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:24:33.484636       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:24:33.484766       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:24:33.484791       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:24:43.489466       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:24:43.489512       1 main.go:227] handling current node
	I0429 13:24:43.489523       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:24:43.489528       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:24:43.489641       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:24:43.489650       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:24:53.535826       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:24:53.536048       1 main.go:227] handling current node
	I0429 13:24:53.536113       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:24:53.536152       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:24:53.536416       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:24:53.536451       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:25:03.542862       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:25:03.543120       1 main.go:227] handling current node
	I0429 13:25:03.543159       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:25:03.543181       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:25:03.543366       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:25:03.543393       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0] <==
	I0429 13:21:23.162092       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:33.173719       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:33.173928       1 main.go:227] handling current node
	I0429 13:21:33.173977       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:33.173998       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:33.174169       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:33.174190       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:43.188675       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:43.188824       1 main.go:227] handling current node
	I0429 13:21:43.188857       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:43.188878       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:43.189051       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:43.189075       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:53.194099       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:53.194143       1 main.go:227] handling current node
	I0429 13:21:53.194154       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:53.194159       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:53.194315       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:53.194339       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:22:03.204391       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:22:03.204433       1 main.go:227] handling current node
	I0429 13:22:03.205042       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:22:03.205110       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:22:03.205502       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:22:03.205556       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd] <==
	E0429 13:22:11.404933       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.405005       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.405108       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.405176       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.405472       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.406461       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.406667       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.407334       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407715       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407786       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.407828       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408069       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.408176       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.407555       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408390       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408590       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408949       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.410550       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411409       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407848       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411519       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411582       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.411720       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.411857       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.411936       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6] <==
	I0429 13:23:51.218666       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 13:23:51.218835       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 13:23:51.218905       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 13:23:51.219753       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 13:23:51.219409       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 13:23:51.219478       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:23:51.219964       1 aggregator.go:165] initial CRD sync complete...
	I0429 13:23:51.219971       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 13:23:51.219977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 13:23:51.219983       1 cache.go:39] Caches are synced for autoregister controller
	I0429 13:23:51.220268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 13:23:51.228702       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 13:23:51.229444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 13:23:51.241676       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:23:51.241738       1 policy_source.go:224] refreshing policies
	I0429 13:23:51.241844       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 13:23:51.283129       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:23:52.021853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 13:23:53.890523       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:23:54.035088       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:23:54.065922       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:23:54.182824       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:23:54.205827       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 13:24:03.531146       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 13:24:03.640124       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8] <==
	I0429 13:18:27.030974       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m02\" does not exist"
	I0429 13:18:27.098993       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m02" podCIDRs=["10.244.1.0/24"]
	I0429 13:18:29.442804       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-404116-m02"
	I0429 13:18:35.661927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:18:37.885960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.416925ms"
	I0429 13:18:37.908619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.451743ms"
	I0429 13:18:37.929064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.387778ms"
	I0429 13:18:37.929173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.197µs"
	I0429 13:18:40.000043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.642481ms"
	I0429 13:18:40.000292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.34µs"
	I0429 13:18:40.137140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.60634ms"
	I0429 13:18:40.138257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.953µs"
	I0429 13:19:13.790054       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:13.792319       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:19:13.803596       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.2.0/24"]
	I0429 13:19:14.459486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-404116-m03"
	I0429 13:19:24.128069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:54.805379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:56.091643       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:19:56.091744       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:56.103149       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.3.0/24"]
	I0429 13:20:04.502601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:20:49.513440       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m03"
	I0429 13:20:49.582517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.956745ms"
	I0429 13:20:49.582754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.959µs"
	
	
	==> kube-controller-manager [9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40] <==
	I0429 13:24:04.169714       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:24:04.214362       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:24:04.214554       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 13:24:26.406030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.722151ms"
	I0429 13:24:26.421496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.400104ms"
	I0429 13:24:26.422092       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.854µs"
	I0429 13:24:30.840387       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m02\" does not exist"
	I0429 13:24:30.856846       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m02" podCIDRs=["10.244.1.0/24"]
	I0429 13:24:32.728829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="944.96µs"
	I0429 13:24:32.776837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.625µs"
	I0429 13:24:32.791927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.239µs"
	I0429 13:24:32.821130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.546µs"
	I0429 13:24:32.832291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.56µs"
	I0429 13:24:32.843083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.977µs"
	I0429 13:24:34.620474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.92µs"
	I0429 13:24:39.186454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:39.225431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="489.82µs"
	I0429 13:24:39.240601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.692µs"
	I0429 13:24:41.224874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.16068ms"
	I0429 13:24:41.226082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.806µs"
	I0429 13:24:58.778782       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:59.920925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:24:59.924089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:59.935631       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.2.0/24"]
	I0429 13:25:08.344554       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	
	
	==> kube-proxy [e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b] <==
	I0429 13:23:52.662333       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:23:52.755097       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.179"]
	I0429 13:23:52.879702       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:23:52.879805       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:23:52.879837       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:23:52.885399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:23:52.885623       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:23:52.885658       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:23:52.887824       1 config.go:192] "Starting service config controller"
	I0429 13:23:52.887869       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:23:52.887906       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:23:52.887910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:23:52.889604       1 config.go:319] "Starting node config controller"
	I0429 13:23:52.889631       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:23:52.988832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:23:52.988927       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:23:52.990354       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1] <==
	I0429 13:17:52.408262       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:17:52.423860       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.179"]
	I0429 13:17:52.473813       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:17:52.473908       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:17:52.473938       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:17:52.476973       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:17:52.477383       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:17:52.477601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:17:52.479811       1 config.go:192] "Starting service config controller"
	I0429 13:17:52.479869       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:17:52.479927       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:17:52.479945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:17:52.480658       1 config.go:319] "Starting node config controller"
	I0429 13:17:52.480725       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:17:52.580162       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:17:52.580392       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:17:52.580858       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09] <==
	E0429 13:17:34.966682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 13:17:34.979284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 13:17:34.979339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 13:17:34.981860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 13:17:34.981908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 13:17:35.137686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 13:17:35.137745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 13:17:35.166098       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 13:17:35.166266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 13:17:35.184991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 13:17:35.185099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 13:17:35.202385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 13:17:35.202436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 13:17:35.295459       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 13:17:35.295509       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:17:35.408487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 13:17:35.408594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 13:17:35.437613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 13:17:35.438174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 13:17:35.453388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 13:17:35.453503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0429 13:17:38.413022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:22:11.364886       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 13:22:11.365489       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0429 13:22:11.365587       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676] <==
	I0429 13:23:48.878298       1 serving.go:380] Generated self-signed cert in-memory
	W0429 13:23:51.060592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 13:23:51.060691       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:23:51.060704       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 13:23:51.060710       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 13:23:51.192361       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 13:23:51.192478       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:23:51.196967       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 13:23:51.197006       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:23:51.199581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 13:23:51.199707       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 13:23:51.297862       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:23:48 multinode-404116 kubelet[3055]: I0429 13:23:48.180755    3055 kubelet_node_status.go:73] "Attempting to register node" node="multinode-404116"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.267741    3055 kubelet_node_status.go:112] "Node was previously registered" node="multinode-404116"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.268162    3055 kubelet_node_status.go:76] "Successfully registered node" node="multinode-404116"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.270172    3055 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.271609    3055 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: E0429 13:23:51.565951    3055 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-404116\" already exists" pod="kube-system/kube-controller-manager-multinode-404116"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.633122    3055 apiserver.go:52] "Watching apiserver"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.637174    3055 topology_manager.go:215] "Topology Admit Handler" podUID="e6e94d18-d1f4-41db-8c32-a324a4023f94" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mmfbk"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.637403    3055 topology_manager.go:215] "Topology Admit Handler" podUID="729af88d-aabc-412e-a4ba-e6fde2391fe5" podNamespace="kube-system" podName="kube-proxy-rz7lc"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.638020    3055 topology_manager.go:215] "Topology Admit Handler" podUID="fc25a9d4-3dae-4000-a303-6afc0ef95463" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.638433    3055 topology_manager.go:215] "Topology Admit Handler" podUID="93b1fe59-3774-4270-84bc-d3028250d27e" podNamespace="kube-system" podName="kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.638613    3055 topology_manager.go:215] "Topology Admit Handler" podUID="749d96cc-d7ac-4204-8508-554f13dd2f79" podNamespace="default" podName="busybox-fc5497c4f-qv47r"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.661027    3055 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679393    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/729af88d-aabc-412e-a4ba-e6fde2391fe5-xtables-lock\") pod \"kube-proxy-rz7lc\" (UID: \"729af88d-aabc-412e-a4ba-e6fde2391fe5\") " pod="kube-system/kube-proxy-rz7lc"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679427    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc25a9d4-3dae-4000-a303-6afc0ef95463-tmp\") pod \"storage-provisioner\" (UID: \"fc25a9d4-3dae-4000-a303-6afc0ef95463\") " pod="kube-system/storage-provisioner"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679501    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-lib-modules\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679532    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729af88d-aabc-412e-a4ba-e6fde2391fe5-lib-modules\") pod \"kube-proxy-rz7lc\" (UID: \"729af88d-aabc-412e-a4ba-e6fde2391fe5\") " pod="kube-system/kube-proxy-rz7lc"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679561    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-xtables-lock\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679576    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-cni-cfg\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:58 multinode-404116 kubelet[3055]: I0429 13:23:58.624800    3055 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 29 13:24:46 multinode-404116 kubelet[3055]: E0429 13:24:46.743126    3055 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:25:10.881150  889868 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18773-847310/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
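The "bufio.Scanner: token too long" error in the stderr block above is Go's scanner hitting its default 64 KiB per-token limit while reading a very long line in lastStart.txt. A minimal sketch (not minikube's actual logs.go) of reading such a file with the buffer cap raised; the file name is taken from the error above, everything else is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line; without the Buffer call, any line
// over bufio.MaxScanTokenSize (64 KiB) makes Scan stop and Err return
// bufio.ErrTooLong ("bufio.Scanner: token too long").
func readLongLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Allow tokens up to 10 MiB instead of the 64 KiB default.
	sc.Buffer(make([]byte, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err()
}

func main() {
	if err := readLongLines("lastStart.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}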
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-404116 -n multinode-404116
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-404116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (305.61s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 stop
E0429 13:26:19.253685  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404116 stop: exit status 82 (2m0.546511002s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-404116-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-404116 stop": exit status 82
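The exit status 82 here accompanies the GUEST_STOP_TIMEOUT reason in the stderr block above: the m02 VM never left the "Running" state within the stop timeout. An illustrative sketch only (not the harness code) of how such an invocation shells out to the minikube binary and inspects the exit code; the binary path and profile name are copied from the failing command above:

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	// Mirrors the failing command: out/minikube-linux-amd64 -p multinode-404116 stop
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "multinode-404116", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run the stop path gave up with exit status 82 after the
		// GUEST_STOP_TIMEOUT shown in the stderr above.
		fmt.Println("minikube stop exited with code", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube stop:", err)
	}
}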
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404116 status: exit status 3 (18.705626403s)

                                                
                                                
-- stdout --
	multinode-404116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-404116-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:27:34.799804  890541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 13:27:34.799845  890541 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-404116 status" : exit status 3
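The status failure follows directly from the stderr above: the worker's SSH endpoint (192.168.39.27:22) is unreachable, so the host is reported as Error and the kubelet as Nonexistent. A hypothetical reachability probe (not minikube's status.go) that would surface the same "connect: no route to host" error:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the status error above; the probe itself is illustrative.
	conn, err := net.DialTimeout("tcp", "192.168.39.27:22", 5*time.Second)
	if err != nil {
		fmt.Println("worker unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("worker SSH port reachable")
}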
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-404116 -n multinode-404116
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-404116 logs -n 25: (1.712175075s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116:/home/docker/cp-test_multinode-404116-m02_multinode-404116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116 sudo cat                                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m02_multinode-404116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03:/home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116-m03 sudo cat                                   | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp testdata/cp-test.txt                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116:/home/docker/cp-test_multinode-404116-m03_multinode-404116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116 sudo cat                                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m03_multinode-404116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt                       | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m02:/home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n                                                                 | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | multinode-404116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-404116 ssh -n multinode-404116-m02 sudo cat                                   | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	|         | /home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-404116 node stop m03                                                          | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:19 UTC |
	| node    | multinode-404116 node start                                                             | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:19 UTC | 29 Apr 24 13:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-404116                                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:20 UTC |                     |
	| stop    | -p multinode-404116                                                                     | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:20 UTC |                     |
	| start   | -p multinode-404116                                                                     | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:22 UTC | 29 Apr 24 13:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-404116                                                                | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:25 UTC |                     |
	| node    | multinode-404116 node delete                                                            | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:25 UTC | 29 Apr 24 13:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-404116 stop                                                                   | multinode-404116 | jenkins | v1.33.0 | 29 Apr 24 13:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:22:10
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:22:10.380725  888828 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:22:10.381066  888828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:22:10.381079  888828 out.go:304] Setting ErrFile to fd 2...
	I0429 13:22:10.381084  888828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:22:10.381326  888828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:22:10.382004  888828 out.go:298] Setting JSON to false
	I0429 13:22:10.383190  888828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":79475,"bootTime":1714317455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:22:10.383278  888828 start.go:139] virtualization: kvm guest
	I0429 13:22:10.386372  888828 out.go:177] * [multinode-404116] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:22:10.388157  888828 notify.go:220] Checking for updates...
	I0429 13:22:10.388189  888828 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:22:10.389891  888828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:22:10.391812  888828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:22:10.393406  888828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:22:10.395092  888828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:22:10.396771  888828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:22:10.398911  888828 config.go:182] Loaded profile config "multinode-404116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:22:10.399065  888828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:22:10.399599  888828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:22:10.399695  888828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:22:10.416696  888828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0429 13:22:10.417273  888828 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:22:10.417860  888828 main.go:141] libmachine: Using API Version  1
	I0429 13:22:10.417884  888828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:22:10.418354  888828 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:22:10.418666  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.459291  888828 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 13:22:10.460654  888828 start.go:297] selected driver: kvm2
	I0429 13:22:10.460680  888828 start.go:901] validating driver "kvm2" against &{Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:22:10.460853  888828 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:22:10.461250  888828 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:22:10.461336  888828 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:22:10.478115  888828 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:22:10.478857  888828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:22:10.478922  888828 cni.go:84] Creating CNI manager for ""
	I0429 13:22:10.478935  888828 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:22:10.478999  888828 start.go:340] cluster config:
	{Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:22:10.479143  888828 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:22:10.481532  888828 out.go:177] * Starting "multinode-404116" primary control-plane node in "multinode-404116" cluster
	I0429 13:22:10.482872  888828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:22:10.482930  888828 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:22:10.482944  888828 cache.go:56] Caching tarball of preloaded images
	I0429 13:22:10.483046  888828 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:22:10.483062  888828 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 13:22:10.483201  888828 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/config.json ...
	I0429 13:22:10.483454  888828 start.go:360] acquireMachinesLock for multinode-404116: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:22:10.483509  888828 start.go:364] duration metric: took 30.373µs to acquireMachinesLock for "multinode-404116"
	I0429 13:22:10.483530  888828 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:22:10.483539  888828 fix.go:54] fixHost starting: 
	I0429 13:22:10.483860  888828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:22:10.483908  888828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:22:10.499777  888828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0429 13:22:10.500287  888828 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:22:10.500874  888828 main.go:141] libmachine: Using API Version  1
	I0429 13:22:10.500903  888828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:22:10.501266  888828 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:22:10.501459  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.501591  888828 main.go:141] libmachine: (multinode-404116) Calling .GetState
	I0429 13:22:10.503527  888828 fix.go:112] recreateIfNeeded on multinode-404116: state=Running err=<nil>
	W0429 13:22:10.503567  888828 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:22:10.506029  888828 out.go:177] * Updating the running kvm2 "multinode-404116" VM ...
	I0429 13:22:10.507484  888828 machine.go:94] provisionDockerMachine start ...
	I0429 13:22:10.507519  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:22:10.507855  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.511413  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.512028  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.512081  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.512229  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.512484  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.512659  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.512863  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.513050  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.513327  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.513349  888828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:22:10.633684  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-404116
	
	I0429 13:22:10.633737  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.634098  888828 buildroot.go:166] provisioning hostname "multinode-404116"
	I0429 13:22:10.634136  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.634330  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.637950  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.638523  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.638574  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.638858  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.639140  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.639410  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.639590  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.639810  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.640068  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.640089  888828 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-404116 && echo "multinode-404116" | sudo tee /etc/hostname
	I0429 13:22:10.781571  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-404116
	
	I0429 13:22:10.781607  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.784810  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.785338  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.785377  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.785645  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:10.785903  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.786091  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:10.786249  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:10.786414  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:10.786618  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:10.786642  888828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-404116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-404116/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-404116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:22:10.901158  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:22:10.901199  888828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:22:10.901272  888828 buildroot.go:174] setting up certificates
	I0429 13:22:10.901290  888828 provision.go:84] configureAuth start
	I0429 13:22:10.901308  888828 main.go:141] libmachine: (multinode-404116) Calling .GetMachineName
	I0429 13:22:10.901618  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:22:10.904659  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.905121  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.905149  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.905327  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:10.907898  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.908288  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:10.908318  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:10.908486  888828 provision.go:143] copyHostCerts
	I0429 13:22:10.908560  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:22:10.908607  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:22:10.908621  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:22:10.908721  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:22:10.908857  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:22:10.908900  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:22:10.908910  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:22:10.908956  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:22:10.909024  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:22:10.909050  888828 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:22:10.909060  888828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:22:10.909155  888828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:22:10.909244  888828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.multinode-404116 san=[127.0.0.1 192.168.39.179 localhost minikube multinode-404116]
	I0429 13:22:11.047075  888828 provision.go:177] copyRemoteCerts
	I0429 13:22:11.047144  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:22:11.047172  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:11.050212  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.050581  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:11.050610  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.050798  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:11.051010  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.051284  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:11.051494  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:22:11.140649  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 13:22:11.140749  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:22:11.170840  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 13:22:11.170928  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 13:22:11.200391  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 13:22:11.200479  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:22:11.228660  888828 provision.go:87] duration metric: took 327.355422ms to configureAuth
	I0429 13:22:11.228693  888828 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:22:11.228932  888828 config.go:182] Loaded profile config "multinode-404116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:22:11.229014  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:22:11.231687  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.232124  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:22:11.232158  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:22:11.232364  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:22:11.232588  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.232771  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:22:11.232977  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:22:11.233194  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:22:11.233416  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:22:11.233433  888828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:23:42.022158  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:23:42.022205  888828 machine.go:97] duration metric: took 1m31.514702644s to provisionDockerMachine
	I0429 13:23:42.022221  888828 start.go:293] postStartSetup for "multinode-404116" (driver="kvm2")
	I0429 13:23:42.022241  888828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:23:42.022281  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.022640  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:23:42.022681  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.026455  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.026998  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.027036  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.027225  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.027517  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.027747  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.028029  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.121476  888828 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:23:42.126207  888828 command_runner.go:130] > NAME=Buildroot
	I0429 13:23:42.126238  888828 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 13:23:42.126245  888828 command_runner.go:130] > ID=buildroot
	I0429 13:23:42.126252  888828 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 13:23:42.126259  888828 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 13:23:42.126312  888828 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:23:42.126331  888828 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:23:42.126438  888828 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:23:42.126557  888828 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:23:42.126575  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /etc/ssl/certs/8546602.pem
	I0429 13:23:42.126719  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:23:42.137939  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:23:42.165261  888828 start.go:296] duration metric: took 143.022946ms for postStartSetup
	I0429 13:23:42.165344  888828 fix.go:56] duration metric: took 1m31.681804102s for fixHost
	I0429 13:23:42.165373  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.168466  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.168925  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.168969  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.169204  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.169462  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.169622  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.169792  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.170000  888828 main.go:141] libmachine: Using SSH client type: native
	I0429 13:23:42.170185  888828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0429 13:23:42.170196  888828 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 13:23:42.285106  888828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714397022.261077798
	
	I0429 13:23:42.285138  888828 fix.go:216] guest clock: 1714397022.261077798
	I0429 13:23:42.285146  888828 fix.go:229] Guest: 2024-04-29 13:23:42.261077798 +0000 UTC Remote: 2024-04-29 13:23:42.165351568 +0000 UTC m=+91.843247629 (delta=95.72623ms)
	I0429 13:23:42.285191  888828 fix.go:200] guest clock delta is within tolerance: 95.72623ms
	I0429 13:23:42.285199  888828 start.go:83] releasing machines lock for "multinode-404116", held for 1m31.801677231s
	I0429 13:23:42.285228  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.285612  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:23:42.289313  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.289813  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.289845  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.290111  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.290920  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.291205  888828 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:23:42.291333  888828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:23:42.291387  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.291541  888828 ssh_runner.go:195] Run: cat /version.json
	I0429 13:23:42.291575  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:23:42.295256  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295289  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295826  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.295873  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:42.295902  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.295921  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:42.296090  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.296110  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:23:42.296357  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.296357  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:23:42.296545  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.296548  888828 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:23:42.296832  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.296952  888828 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:23:42.402890  888828 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 13:23:42.402953  888828 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 13:23:42.403144  888828 ssh_runner.go:195] Run: systemctl --version
	I0429 13:23:42.409843  888828 command_runner.go:130] > systemd 252 (252)
	I0429 13:23:42.409890  888828 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 13:23:42.409993  888828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:23:42.580166  888828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 13:23:42.587032  888828 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 13:23:42.587106  888828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:23:42.587173  888828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:23:42.598571  888828 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 13:23:42.598615  888828 start.go:494] detecting cgroup driver to use...
	I0429 13:23:42.598704  888828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:23:42.617547  888828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:23:42.633809  888828 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:23:42.633898  888828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:23:42.650294  888828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:23:42.666463  888828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:23:42.820781  888828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:23:42.970140  888828 docker.go:233] disabling docker service ...
	I0429 13:23:42.970228  888828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:23:42.989092  888828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:23:43.005789  888828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:23:43.159499  888828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:23:43.312155  888828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:23:43.328715  888828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:23:43.350301  888828 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 13:23:43.350354  888828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:23:43.350403  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.363505  888828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:23:43.363595  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.376955  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.389807  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.403459  888828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:23:43.416706  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.429463  888828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.441678  888828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:23:43.454378  888828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:23:43.465805  888828 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 13:23:43.465920  888828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:23:43.478300  888828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:23:43.629388  888828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:23:43.903950  888828 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:23:43.904050  888828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:23:43.909278  888828 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 13:23:43.909317  888828 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 13:23:43.909327  888828 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0429 13:23:43.909335  888828 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:23:43.909340  888828 command_runner.go:130] > Access: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909347  888828 command_runner.go:130] > Modify: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909352  888828 command_runner.go:130] > Change: 2024-04-29 13:23:43.750171900 +0000
	I0429 13:23:43.909356  888828 command_runner.go:130] >  Birth: -
	I0429 13:23:43.909404  888828 start.go:562] Will wait 60s for crictl version
	I0429 13:23:43.909460  888828 ssh_runner.go:195] Run: which crictl
	I0429 13:23:43.919768  888828 command_runner.go:130] > /usr/bin/crictl
	I0429 13:23:43.919903  888828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:23:43.966574  888828 command_runner.go:130] > Version:  0.1.0
	I0429 13:23:43.966609  888828 command_runner.go:130] > RuntimeName:  cri-o
	I0429 13:23:43.966614  888828 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 13:23:43.966620  888828 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 13:23:43.966646  888828 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:23:43.966726  888828 ssh_runner.go:195] Run: crio --version
	I0429 13:23:44.001479  888828 command_runner.go:130] > crio version 1.29.1
	I0429 13:23:44.001513  888828 command_runner.go:130] > Version:        1.29.1
	I0429 13:23:44.001522  888828 command_runner.go:130] > GitCommit:      unknown
	I0429 13:23:44.001530  888828 command_runner.go:130] > GitCommitDate:  unknown
	I0429 13:23:44.001536  888828 command_runner.go:130] > GitTreeState:   clean
	I0429 13:23:44.001544  888828 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 13:23:44.001551  888828 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 13:23:44.001558  888828 command_runner.go:130] > Compiler:       gc
	I0429 13:23:44.001565  888828 command_runner.go:130] > Platform:       linux/amd64
	I0429 13:23:44.001570  888828 command_runner.go:130] > Linkmode:       dynamic
	I0429 13:23:44.001576  888828 command_runner.go:130] > BuildTags:      
	I0429 13:23:44.001589  888828 command_runner.go:130] >   containers_image_ostree_stub
	I0429 13:23:44.001603  888828 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 13:23:44.001613  888828 command_runner.go:130] >   btrfs_noversion
	I0429 13:23:44.001621  888828 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 13:23:44.001629  888828 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 13:23:44.001637  888828 command_runner.go:130] >   seccomp
	I0429 13:23:44.001645  888828 command_runner.go:130] > LDFlags:          unknown
	I0429 13:23:44.001656  888828 command_runner.go:130] > SeccompEnabled:   true
	I0429 13:23:44.001664  888828 command_runner.go:130] > AppArmorEnabled:  false
	I0429 13:23:44.003129  888828 ssh_runner.go:195] Run: crio --version
	I0429 13:23:44.036535  888828 command_runner.go:130] > crio version 1.29.1
	I0429 13:23:44.036561  888828 command_runner.go:130] > Version:        1.29.1
	I0429 13:23:44.036566  888828 command_runner.go:130] > GitCommit:      unknown
	I0429 13:23:44.036571  888828 command_runner.go:130] > GitCommitDate:  unknown
	I0429 13:23:44.036581  888828 command_runner.go:130] > GitTreeState:   clean
	I0429 13:23:44.036587  888828 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 13:23:44.036591  888828 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 13:23:44.036595  888828 command_runner.go:130] > Compiler:       gc
	I0429 13:23:44.036599  888828 command_runner.go:130] > Platform:       linux/amd64
	I0429 13:23:44.036604  888828 command_runner.go:130] > Linkmode:       dynamic
	I0429 13:23:44.036609  888828 command_runner.go:130] > BuildTags:      
	I0429 13:23:44.036613  888828 command_runner.go:130] >   containers_image_ostree_stub
	I0429 13:23:44.036617  888828 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 13:23:44.036621  888828 command_runner.go:130] >   btrfs_noversion
	I0429 13:23:44.036625  888828 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 13:23:44.036632  888828 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 13:23:44.036635  888828 command_runner.go:130] >   seccomp
	I0429 13:23:44.036639  888828 command_runner.go:130] > LDFlags:          unknown
	I0429 13:23:44.036643  888828 command_runner.go:130] > SeccompEnabled:   true
	I0429 13:23:44.036647  888828 command_runner.go:130] > AppArmorEnabled:  false
	I0429 13:23:44.038966  888828 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:23:44.040428  888828 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:23:44.043662  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:44.044090  888828 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:23:44.044130  888828 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:23:44.044360  888828 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 13:23:44.049246  888828 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 13:23:44.049421  888828 kubeadm.go:877] updating cluster {Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:23:44.049578  888828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:23:44.049654  888828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:23:44.098284  888828 command_runner.go:130] > {
	I0429 13:23:44.098311  888828 command_runner.go:130] >   "images": [
	I0429 13:23:44.098315  888828 command_runner.go:130] >     {
	I0429 13:23:44.098323  888828 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 13:23:44.098329  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098334  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 13:23:44.098339  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098343  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098367  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 13:23:44.098378  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 13:23:44.098386  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098392  888828 command_runner.go:130] >       "size": "65291810",
	I0429 13:23:44.098398  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098408  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098420  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098430  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098434  888828 command_runner.go:130] >     },
	I0429 13:23:44.098438  888828 command_runner.go:130] >     {
	I0429 13:23:44.098446  888828 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 13:23:44.098450  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098458  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 13:23:44.098462  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098467  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098475  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 13:23:44.098482  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 13:23:44.098491  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098497  888828 command_runner.go:130] >       "size": "1363676",
	I0429 13:23:44.098507  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098520  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098530  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098536  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098543  888828 command_runner.go:130] >     },
	I0429 13:23:44.098546  888828 command_runner.go:130] >     {
	I0429 13:23:44.098553  888828 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 13:23:44.098559  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098564  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 13:23:44.098570  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098574  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098581  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 13:23:44.098598  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 13:23:44.098611  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098622  888828 command_runner.go:130] >       "size": "31470524",
	I0429 13:23:44.098631  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098641  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098655  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098662  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098665  888828 command_runner.go:130] >     },
	I0429 13:23:44.098669  888828 command_runner.go:130] >     {
	I0429 13:23:44.098677  888828 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 13:23:44.098684  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098691  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 13:23:44.098701  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098710  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098725  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 13:23:44.098750  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 13:23:44.098758  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098766  888828 command_runner.go:130] >       "size": "61245718",
	I0429 13:23:44.098770  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.098777  888828 command_runner.go:130] >       "username": "nonroot",
	I0429 13:23:44.098782  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098791  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098800  888828 command_runner.go:130] >     },
	I0429 13:23:44.098807  888828 command_runner.go:130] >     {
	I0429 13:23:44.098821  888828 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 13:23:44.098831  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.098842  888828 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 13:23:44.098851  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098860  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.098873  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 13:23:44.098884  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 13:23:44.098893  888828 command_runner.go:130] >       ],
	I0429 13:23:44.098902  888828 command_runner.go:130] >       "size": "150779692",
	I0429 13:23:44.098910  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.098921  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.098930  888828 command_runner.go:130] >       },
	I0429 13:23:44.098939  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.098948  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.098962  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.098970  888828 command_runner.go:130] >     },
	I0429 13:23:44.098977  888828 command_runner.go:130] >     {
	I0429 13:23:44.098985  888828 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 13:23:44.098994  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099006  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 13:23:44.099015  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099022  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099038  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 13:23:44.099053  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 13:23:44.099061  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099069  888828 command_runner.go:130] >       "size": "117609952",
	I0429 13:23:44.099073  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099082  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099091  888828 command_runner.go:130] >       },
	I0429 13:23:44.099099  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099108  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099118  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099126  888828 command_runner.go:130] >     },
	I0429 13:23:44.099133  888828 command_runner.go:130] >     {
	I0429 13:23:44.099143  888828 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 13:23:44.099153  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099162  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 13:23:44.099167  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099175  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099189  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 13:23:44.099205  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 13:23:44.099212  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099220  888828 command_runner.go:130] >       "size": "112170310",
	I0429 13:23:44.099229  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099235  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099243  888828 command_runner.go:130] >       },
	I0429 13:23:44.099250  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099260  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099267  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099274  888828 command_runner.go:130] >     },
	I0429 13:23:44.099277  888828 command_runner.go:130] >     {
	I0429 13:23:44.099290  888828 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 13:23:44.099299  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099310  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 13:23:44.099319  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099330  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099376  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 13:23:44.099393  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 13:23:44.099403  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099411  888828 command_runner.go:130] >       "size": "85932953",
	I0429 13:23:44.099420  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.099429  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099441  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099448  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099454  888828 command_runner.go:130] >     },
	I0429 13:23:44.099458  888828 command_runner.go:130] >     {
	I0429 13:23:44.099464  888828 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 13:23:44.099470  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099479  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 13:23:44.099485  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099491  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099503  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 13:23:44.099514  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 13:23:44.099519  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099526  888828 command_runner.go:130] >       "size": "63026502",
	I0429 13:23:44.099531  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099537  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.099542  888828 command_runner.go:130] >       },
	I0429 13:23:44.099546  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099550  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099555  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.099561  888828 command_runner.go:130] >     },
	I0429 13:23:44.099566  888828 command_runner.go:130] >     {
	I0429 13:23:44.099576  888828 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 13:23:44.099583  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.099590  888828 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 13:23:44.099596  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099602  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.099617  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 13:23:44.099635  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 13:23:44.099643  888828 command_runner.go:130] >       ],
	I0429 13:23:44.099654  888828 command_runner.go:130] >       "size": "750414",
	I0429 13:23:44.099663  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.099670  888828 command_runner.go:130] >         "value": "65535"
	I0429 13:23:44.099679  888828 command_runner.go:130] >       },
	I0429 13:23:44.099685  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.099696  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.099706  888828 command_runner.go:130] >       "pinned": true
	I0429 13:23:44.099715  888828 command_runner.go:130] >     }
	I0429 13:23:44.099723  888828 command_runner.go:130] >   ]
	I0429 13:23:44.099733  888828 command_runner.go:130] > }
	I0429 13:23:44.099986  888828 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:23:44.100002  888828 crio.go:433] Images already preloaded, skipping extraction
	I0429 13:23:44.100076  888828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:23:44.138064  888828 command_runner.go:130] > {
	I0429 13:23:44.138096  888828 command_runner.go:130] >   "images": [
	I0429 13:23:44.138101  888828 command_runner.go:130] >     {
	I0429 13:23:44.138135  888828 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 13:23:44.138142  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138150  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 13:23:44.138154  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138160  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138172  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 13:23:44.138187  888828 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 13:23:44.138194  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138202  888828 command_runner.go:130] >       "size": "65291810",
	I0429 13:23:44.138212  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138220  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138234  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138242  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138249  888828 command_runner.go:130] >     },
	I0429 13:23:44.138255  888828 command_runner.go:130] >     {
	I0429 13:23:44.138273  888828 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 13:23:44.138283  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138292  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 13:23:44.138301  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138309  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138326  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 13:23:44.138342  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 13:23:44.138350  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138358  888828 command_runner.go:130] >       "size": "1363676",
	I0429 13:23:44.138368  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138389  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138399  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138408  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138416  888828 command_runner.go:130] >     },
	I0429 13:23:44.138423  888828 command_runner.go:130] >     {
	I0429 13:23:44.138435  888828 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 13:23:44.138444  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138455  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 13:23:44.138468  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138479  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138493  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 13:23:44.138510  888828 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 13:23:44.138519  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138527  888828 command_runner.go:130] >       "size": "31470524",
	I0429 13:23:44.138536  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138544  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138553  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138562  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138571  888828 command_runner.go:130] >     },
	I0429 13:23:44.138577  888828 command_runner.go:130] >     {
	I0429 13:23:44.138591  888828 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 13:23:44.138604  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138615  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 13:23:44.138624  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138632  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138649  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 13:23:44.138701  888828 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 13:23:44.138716  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138722  888828 command_runner.go:130] >       "size": "61245718",
	I0429 13:23:44.138729  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.138736  888828 command_runner.go:130] >       "username": "nonroot",
	I0429 13:23:44.138746  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138754  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138763  888828 command_runner.go:130] >     },
	I0429 13:23:44.138770  888828 command_runner.go:130] >     {
	I0429 13:23:44.138785  888828 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 13:23:44.138795  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138806  888828 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 13:23:44.138814  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138821  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.138837  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 13:23:44.138851  888828 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 13:23:44.138860  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138868  888828 command_runner.go:130] >       "size": "150779692",
	I0429 13:23:44.138878  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.138886  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.138895  888828 command_runner.go:130] >       },
	I0429 13:23:44.138902  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.138911  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.138918  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.138927  888828 command_runner.go:130] >     },
	I0429 13:23:44.138934  888828 command_runner.go:130] >     {
	I0429 13:23:44.138948  888828 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 13:23:44.138960  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.138972  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 13:23:44.138982  888828 command_runner.go:130] >       ],
	I0429 13:23:44.138991  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139018  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 13:23:44.139034  888828 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 13:23:44.139043  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139050  888828 command_runner.go:130] >       "size": "117609952",
	I0429 13:23:44.139059  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139067  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139074  888828 command_runner.go:130] >       },
	I0429 13:23:44.139082  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139092  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139102  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139108  888828 command_runner.go:130] >     },
	I0429 13:23:44.139117  888828 command_runner.go:130] >     {
	I0429 13:23:44.139129  888828 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 13:23:44.139138  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139148  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 13:23:44.139156  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139164  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139180  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 13:23:44.139196  888828 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 13:23:44.139205  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139213  888828 command_runner.go:130] >       "size": "112170310",
	I0429 13:23:44.139223  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139230  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139239  888828 command_runner.go:130] >       },
	I0429 13:23:44.139246  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139256  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139263  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139270  888828 command_runner.go:130] >     },
	I0429 13:23:44.139278  888828 command_runner.go:130] >     {
	I0429 13:23:44.139289  888828 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 13:23:44.139299  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139310  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 13:23:44.139321  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139331  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139354  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 13:23:44.139384  888828 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 13:23:44.139393  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139403  888828 command_runner.go:130] >       "size": "85932953",
	I0429 13:23:44.139411  888828 command_runner.go:130] >       "uid": null,
	I0429 13:23:44.139419  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139429  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139438  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139446  888828 command_runner.go:130] >     },
	I0429 13:23:44.139452  888828 command_runner.go:130] >     {
	I0429 13:23:44.139464  888828 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 13:23:44.139472  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139481  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 13:23:44.139490  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139498  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139514  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 13:23:44.139530  888828 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 13:23:44.139539  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139547  888828 command_runner.go:130] >       "size": "63026502",
	I0429 13:23:44.139555  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139563  888828 command_runner.go:130] >         "value": "0"
	I0429 13:23:44.139572  888828 command_runner.go:130] >       },
	I0429 13:23:44.139580  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139589  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139596  888828 command_runner.go:130] >       "pinned": false
	I0429 13:23:44.139604  888828 command_runner.go:130] >     },
	I0429 13:23:44.139610  888828 command_runner.go:130] >     {
	I0429 13:23:44.139624  888828 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 13:23:44.139633  888828 command_runner.go:130] >       "repoTags": [
	I0429 13:23:44.139641  888828 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 13:23:44.139651  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139658  888828 command_runner.go:130] >       "repoDigests": [
	I0429 13:23:44.139672  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 13:23:44.139688  888828 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 13:23:44.139704  888828 command_runner.go:130] >       ],
	I0429 13:23:44.139715  888828 command_runner.go:130] >       "size": "750414",
	I0429 13:23:44.139723  888828 command_runner.go:130] >       "uid": {
	I0429 13:23:44.139733  888828 command_runner.go:130] >         "value": "65535"
	I0429 13:23:44.139742  888828 command_runner.go:130] >       },
	I0429 13:23:44.139751  888828 command_runner.go:130] >       "username": "",
	I0429 13:23:44.139762  888828 command_runner.go:130] >       "spec": null,
	I0429 13:23:44.139771  888828 command_runner.go:130] >       "pinned": true
	I0429 13:23:44.139778  888828 command_runner.go:130] >     }
	I0429 13:23:44.139787  888828 command_runner.go:130] >   ]
	I0429 13:23:44.139793  888828 command_runner.go:130] > }
	I0429 13:23:44.139940  888828 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:23:44.139956  888828 cache_images.go:84] Images are preloaded, skipping loading
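The JSON listing that ends above is the CRI image inventory minikube inspects before deciding the preload can be skipped. As a minimal sketch (not minikube's implementation), the Go snippet below parses output of the same shape, assuming it was captured on the node with something like `sudo crictl images --output json`; the struct fields mirror the keys visible in the log, and everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// criImage mirrors the per-image fields shown in the log above.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Expects the captured JSON file as the first argument.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var list imageList
	if err := json.Unmarshal(data, &list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s size=%s pinned=%v\n", tag, img.Size, img.Pinned)
	}
}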
	I0429 13:23:44.139989  888828 kubeadm.go:928] updating node { 192.168.39.179 8443 v1.30.0 crio true true} ...
	I0429 13:23:44.140150  888828 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-404116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
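The kubeadm step above prints the kubelet ExecStart line that gets written for this node. A minimal sketch of how that flag line could be assembled from the node parameters follows; the kubeletFlags helper and its signature are hypothetical, and only the flag names and values are taken from the log.

package main

import (
	"fmt"
	"strings"
)

// kubeletFlags joins the kubelet binary path with the per-node flags seen in the log.
func kubeletFlags(k8sVersion, nodeName, nodeIP string) string {
	args := []string{
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", k8sVersion),
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return strings.Join(args, " ")
}

func main() {
	// Reproduces the ExecStart line shown above for this node.
	fmt.Println(kubeletFlags("v1.30.0", "multinode-404116", "192.168.39.179"))
}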
	I0429 13:23:44.140252  888828 ssh_runner.go:195] Run: crio config
	I0429 13:23:44.181854  888828 command_runner.go:130] ! time="2024-04-29 13:23:44.158094108Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 13:23:44.188904  888828 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 13:23:44.203601  888828 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 13:23:44.203629  888828 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 13:23:44.203635  888828 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 13:23:44.203639  888828 command_runner.go:130] > #
	I0429 13:23:44.203645  888828 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 13:23:44.203651  888828 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 13:23:44.203657  888828 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 13:23:44.203664  888828 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 13:23:44.203667  888828 command_runner.go:130] > # reload'.
	I0429 13:23:44.203673  888828 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 13:23:44.203679  888828 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 13:23:44.203685  888828 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 13:23:44.203693  888828 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 13:23:44.203698  888828 command_runner.go:130] > [crio]
	I0429 13:23:44.203703  888828 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 13:23:44.203710  888828 command_runner.go:130] > # container images, in this directory.
	I0429 13:23:44.203715  888828 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 13:23:44.203725  888828 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 13:23:44.203736  888828 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 13:23:44.203746  888828 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 13:23:44.203750  888828 command_runner.go:130] > # imagestore = ""
	I0429 13:23:44.203757  888828 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 13:23:44.203766  888828 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 13:23:44.203770  888828 command_runner.go:130] > storage_driver = "overlay"
	I0429 13:23:44.203778  888828 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 13:23:44.203787  888828 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 13:23:44.203792  888828 command_runner.go:130] > storage_option = [
	I0429 13:23:44.203796  888828 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 13:23:44.203802  888828 command_runner.go:130] > ]
	I0429 13:23:44.203809  888828 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 13:23:44.203816  888828 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 13:23:44.203823  888828 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 13:23:44.203829  888828 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 13:23:44.203837  888828 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 13:23:44.203844  888828 command_runner.go:130] > # always happen on a node reboot
	I0429 13:23:44.203849  888828 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 13:23:44.203866  888828 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 13:23:44.203874  888828 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 13:23:44.203880  888828 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 13:23:44.203887  888828 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 13:23:44.203895  888828 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 13:23:44.203905  888828 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 13:23:44.203911  888828 command_runner.go:130] > # internal_wipe = true
	I0429 13:23:44.203919  888828 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 13:23:44.203926  888828 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 13:23:44.203930  888828 command_runner.go:130] > # internal_repair = false
	I0429 13:23:44.203938  888828 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 13:23:44.203955  888828 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 13:23:44.203963  888828 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 13:23:44.203970  888828 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 13:23:44.203976  888828 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 13:23:44.203982  888828 command_runner.go:130] > [crio.api]
	I0429 13:23:44.203994  888828 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 13:23:44.204001  888828 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 13:23:44.204011  888828 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 13:23:44.204018  888828 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 13:23:44.204025  888828 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 13:23:44.204033  888828 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 13:23:44.204040  888828 command_runner.go:130] > # stream_port = "0"
	I0429 13:23:44.204045  888828 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 13:23:44.204052  888828 command_runner.go:130] > # stream_enable_tls = false
	I0429 13:23:44.204058  888828 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 13:23:44.204064  888828 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 13:23:44.204071  888828 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 13:23:44.204079  888828 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 13:23:44.204085  888828 command_runner.go:130] > # minutes.
	I0429 13:23:44.204089  888828 command_runner.go:130] > # stream_tls_cert = ""
	I0429 13:23:44.204097  888828 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 13:23:44.204105  888828 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 13:23:44.204111  888828 command_runner.go:130] > # stream_tls_key = ""
	I0429 13:23:44.204117  888828 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 13:23:44.204125  888828 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 13:23:44.204148  888828 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 13:23:44.204154  888828 command_runner.go:130] > # stream_tls_ca = ""
	I0429 13:23:44.204162  888828 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 13:23:44.204169  888828 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 13:23:44.204176  888828 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 13:23:44.204183  888828 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 13:23:44.204188  888828 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 13:23:44.204196  888828 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 13:23:44.204202  888828 command_runner.go:130] > [crio.runtime]
	I0429 13:23:44.204208  888828 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 13:23:44.204216  888828 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 13:23:44.204221  888828 command_runner.go:130] > # "nofile=1024:2048"
	I0429 13:23:44.204227  888828 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 13:23:44.204233  888828 command_runner.go:130] > # default_ulimits = [
	I0429 13:23:44.204236  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204244  888828 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 13:23:44.204248  888828 command_runner.go:130] > # no_pivot = false
	I0429 13:23:44.204256  888828 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 13:23:44.204268  888828 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 13:23:44.204275  888828 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 13:23:44.204280  888828 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 13:23:44.204287  888828 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 13:23:44.204294  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 13:23:44.204300  888828 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 13:23:44.204304  888828 command_runner.go:130] > # Cgroup setting for conmon
	I0429 13:23:44.204313  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 13:23:44.204319  888828 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 13:23:44.204325  888828 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 13:23:44.204332  888828 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 13:23:44.204339  888828 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 13:23:44.204345  888828 command_runner.go:130] > conmon_env = [
	I0429 13:23:44.204351  888828 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 13:23:44.204356  888828 command_runner.go:130] > ]
	I0429 13:23:44.204361  888828 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 13:23:44.204368  888828 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 13:23:44.204374  888828 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 13:23:44.204380  888828 command_runner.go:130] > # default_env = [
	I0429 13:23:44.204383  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204391  888828 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 13:23:44.204400  888828 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0429 13:23:44.204406  888828 command_runner.go:130] > # selinux = false
	I0429 13:23:44.204412  888828 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 13:23:44.204421  888828 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 13:23:44.204427  888828 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 13:23:44.204433  888828 command_runner.go:130] > # seccomp_profile = ""
	I0429 13:23:44.204438  888828 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 13:23:44.204446  888828 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 13:23:44.204455  888828 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 13:23:44.204461  888828 command_runner.go:130] > # which might increase security.
	I0429 13:23:44.204466  888828 command_runner.go:130] > # This option is currently deprecated,
	I0429 13:23:44.204474  888828 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 13:23:44.204481  888828 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 13:23:44.204487  888828 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 13:23:44.204495  888828 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 13:23:44.204507  888828 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 13:23:44.204516  888828 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 13:23:44.204522  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.204528  888828 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 13:23:44.204534  888828 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 13:23:44.204540  888828 command_runner.go:130] > # the cgroup blockio controller.
	I0429 13:23:44.204545  888828 command_runner.go:130] > # blockio_config_file = ""
	I0429 13:23:44.204553  888828 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 13:23:44.204559  888828 command_runner.go:130] > # blockio parameters.
	I0429 13:23:44.204563  888828 command_runner.go:130] > # blockio_reload = false
	I0429 13:23:44.204572  888828 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 13:23:44.204578  888828 command_runner.go:130] > # irqbalance daemon.
	I0429 13:23:44.204583  888828 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 13:23:44.204591  888828 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 13:23:44.204600  888828 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 13:23:44.204610  888828 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 13:23:44.204617  888828 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 13:23:44.204626  888828 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 13:23:44.204633  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.204637  888828 command_runner.go:130] > # rdt_config_file = ""
	I0429 13:23:44.204645  888828 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 13:23:44.204650  888828 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 13:23:44.204681  888828 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 13:23:44.204688  888828 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 13:23:44.204694  888828 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 13:23:44.204702  888828 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 13:23:44.204706  888828 command_runner.go:130] > # will be added.
	I0429 13:23:44.204711  888828 command_runner.go:130] > # default_capabilities = [
	I0429 13:23:44.204717  888828 command_runner.go:130] > # 	"CHOWN",
	I0429 13:23:44.204721  888828 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 13:23:44.204727  888828 command_runner.go:130] > # 	"FSETID",
	I0429 13:23:44.204730  888828 command_runner.go:130] > # 	"FOWNER",
	I0429 13:23:44.204736  888828 command_runner.go:130] > # 	"SETGID",
	I0429 13:23:44.204740  888828 command_runner.go:130] > # 	"SETUID",
	I0429 13:23:44.204746  888828 command_runner.go:130] > # 	"SETPCAP",
	I0429 13:23:44.204750  888828 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 13:23:44.204761  888828 command_runner.go:130] > # 	"KILL",
	I0429 13:23:44.204767  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204775  888828 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 13:23:44.204783  888828 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 13:23:44.204788  888828 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 13:23:44.204796  888828 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 13:23:44.204804  888828 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 13:23:44.204811  888828 command_runner.go:130] > default_sysctls = [
	I0429 13:23:44.204816  888828 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 13:23:44.204821  888828 command_runner.go:130] > ]
	I0429 13:23:44.204825  888828 command_runner.go:130] > # List of devices on the host that a
	I0429 13:23:44.204833  888828 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 13:23:44.204840  888828 command_runner.go:130] > # allowed_devices = [
	I0429 13:23:44.204843  888828 command_runner.go:130] > # 	"/dev/fuse",
	I0429 13:23:44.204848  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204854  888828 command_runner.go:130] > # List of additional devices, specified as
	I0429 13:23:44.204863  888828 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 13:23:44.204871  888828 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 13:23:44.204876  888828 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 13:23:44.204882  888828 command_runner.go:130] > # additional_devices = [
	I0429 13:23:44.204886  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204893  888828 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 13:23:44.204897  888828 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 13:23:44.204903  888828 command_runner.go:130] > # 	"/etc/cdi",
	I0429 13:23:44.204906  888828 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 13:23:44.204910  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204916  888828 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 13:23:44.204924  888828 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 13:23:44.204931  888828 command_runner.go:130] > # Defaults to false.
	I0429 13:23:44.204936  888828 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 13:23:44.204948  888828 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 13:23:44.204956  888828 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 13:23:44.204962  888828 command_runner.go:130] > # hooks_dir = [
	I0429 13:23:44.204966  888828 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 13:23:44.204970  888828 command_runner.go:130] > # ]
	I0429 13:23:44.204976  888828 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 13:23:44.204990  888828 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 13:23:44.204997  888828 command_runner.go:130] > # its default mounts from the following two files:
	I0429 13:23:44.205003  888828 command_runner.go:130] > #
	I0429 13:23:44.205009  888828 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 13:23:44.205017  888828 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 13:23:44.205023  888828 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 13:23:44.205026  888828 command_runner.go:130] > #
	I0429 13:23:44.205031  888828 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 13:23:44.205039  888828 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 13:23:44.205047  888828 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 13:23:44.205061  888828 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 13:23:44.205068  888828 command_runner.go:130] > #
	I0429 13:23:44.205074  888828 command_runner.go:130] > # default_mounts_file = ""
	I0429 13:23:44.205084  888828 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 13:23:44.205097  888828 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 13:23:44.205106  888828 command_runner.go:130] > pids_limit = 1024
	I0429 13:23:44.205117  888828 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0429 13:23:44.205129  888828 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 13:23:44.205142  888828 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 13:23:44.205158  888828 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 13:23:44.205167  888828 command_runner.go:130] > # log_size_max = -1
	I0429 13:23:44.205181  888828 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 13:23:44.205200  888828 command_runner.go:130] > # log_to_journald = false
	I0429 13:23:44.205214  888828 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 13:23:44.205225  888828 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 13:23:44.205236  888828 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 13:23:44.205248  888828 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 13:23:44.205259  888828 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 13:23:44.205269  888828 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 13:23:44.205281  888828 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 13:23:44.205290  888828 command_runner.go:130] > # read_only = false
	I0429 13:23:44.205303  888828 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 13:23:44.205315  888828 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 13:23:44.205326  888828 command_runner.go:130] > # live configuration reload.
	I0429 13:23:44.205334  888828 command_runner.go:130] > # log_level = "info"
	I0429 13:23:44.205343  888828 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 13:23:44.205359  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.205368  888828 command_runner.go:130] > # log_filter = ""
	I0429 13:23:44.205379  888828 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 13:23:44.205395  888828 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 13:23:44.205404  888828 command_runner.go:130] > # separated by comma.
	I0429 13:23:44.205418  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205427  888828 command_runner.go:130] > # uid_mappings = ""
	I0429 13:23:44.205438  888828 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 13:23:44.205450  888828 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 13:23:44.205460  888828 command_runner.go:130] > # separated by comma.
	I0429 13:23:44.205475  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205484  888828 command_runner.go:130] > # gid_mappings = ""
	I0429 13:23:44.205496  888828 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 13:23:44.205510  888828 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 13:23:44.205522  888828 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 13:23:44.205537  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205547  888828 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 13:23:44.205559  888828 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 13:23:44.205572  888828 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 13:23:44.205585  888828 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 13:23:44.205599  888828 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 13:23:44.205609  888828 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 13:23:44.205619  888828 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 13:23:44.205631  888828 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 13:23:44.205643  888828 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 13:23:44.205653  888828 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 13:23:44.205669  888828 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 13:23:44.205682  888828 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 13:23:44.205693  888828 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 13:23:44.205704  888828 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 13:23:44.205710  888828 command_runner.go:130] > drop_infra_ctr = false
	I0429 13:23:44.205722  888828 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 13:23:44.205734  888828 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 13:23:44.205749  888828 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 13:23:44.205758  888828 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 13:23:44.205773  888828 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 13:23:44.205791  888828 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 13:23:44.205803  888828 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 13:23:44.205814  888828 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 13:23:44.205823  888828 command_runner.go:130] > # shared_cpuset = ""
	I0429 13:23:44.205836  888828 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 13:23:44.205847  888828 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 13:23:44.205856  888828 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 13:23:44.205870  888828 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 13:23:44.205884  888828 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 13:23:44.205896  888828 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 13:23:44.205909  888828 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 13:23:44.205919  888828 command_runner.go:130] > # enable_criu_support = false
	I0429 13:23:44.205930  888828 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 13:23:44.205947  888828 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 13:23:44.205956  888828 command_runner.go:130] > # enable_pod_events = false
	I0429 13:23:44.205970  888828 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 13:23:44.205983  888828 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 13:23:44.205995  888828 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 13:23:44.206004  888828 command_runner.go:130] > # default_runtime = "runc"
	I0429 13:23:44.206014  888828 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 13:23:44.206027  888828 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0429 13:23:44.206042  888828 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 13:23:44.206050  888828 command_runner.go:130] > # creation as a file is not desired either.
	I0429 13:23:44.206060  888828 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 13:23:44.206068  888828 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 13:23:44.206072  888828 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 13:23:44.206078  888828 command_runner.go:130] > # ]
	I0429 13:23:44.206083  888828 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 13:23:44.206092  888828 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 13:23:44.206100  888828 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 13:23:44.206105  888828 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 13:23:44.206110  888828 command_runner.go:130] > #
	I0429 13:23:44.206115  888828 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 13:23:44.206122  888828 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 13:23:44.206175  888828 command_runner.go:130] > # runtime_type = "oci"
	I0429 13:23:44.206184  888828 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 13:23:44.206194  888828 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 13:23:44.206198  888828 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 13:23:44.206205  888828 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 13:23:44.206209  888828 command_runner.go:130] > # monitor_env = []
	I0429 13:23:44.206216  888828 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 13:23:44.206220  888828 command_runner.go:130] > # allowed_annotations = []
	I0429 13:23:44.206228  888828 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 13:23:44.206234  888828 command_runner.go:130] > # Where:
	I0429 13:23:44.206240  888828 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 13:23:44.206248  888828 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 13:23:44.206256  888828 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 13:23:44.206265  888828 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 13:23:44.206271  888828 command_runner.go:130] > #   in $PATH.
	I0429 13:23:44.206277  888828 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 13:23:44.206284  888828 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 13:23:44.206290  888828 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 13:23:44.206296  888828 command_runner.go:130] > #   state.
	I0429 13:23:44.206303  888828 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 13:23:44.206311  888828 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0429 13:23:44.206319  888828 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 13:23:44.206327  888828 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 13:23:44.206334  888828 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 13:23:44.206343  888828 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 13:23:44.206349  888828 command_runner.go:130] > #   The currently recognized values are:
	I0429 13:23:44.206355  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 13:23:44.206365  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 13:23:44.206373  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 13:23:44.206379  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 13:23:44.206389  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 13:23:44.206397  888828 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 13:23:44.206406  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 13:23:44.206412  888828 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 13:23:44.206420  888828 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 13:23:44.206429  888828 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 13:23:44.206434  888828 command_runner.go:130] > #   deprecated option "conmon".
	I0429 13:23:44.206442  888828 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 13:23:44.206455  888828 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 13:23:44.206464  888828 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 13:23:44.206469  888828 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 13:23:44.206477  888828 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0429 13:23:44.206485  888828 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 13:23:44.206491  888828 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 13:23:44.206498  888828 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 13:23:44.206501  888828 command_runner.go:130] > #
	I0429 13:23:44.206506  888828 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 13:23:44.206511  888828 command_runner.go:130] > #
	I0429 13:23:44.206517  888828 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 13:23:44.206526  888828 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 13:23:44.206531  888828 command_runner.go:130] > #
	I0429 13:23:44.206537  888828 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 13:23:44.206545  888828 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 13:23:44.206549  888828 command_runner.go:130] > #
	I0429 13:23:44.206555  888828 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 13:23:44.206560  888828 command_runner.go:130] > # feature.
	I0429 13:23:44.206563  888828 command_runner.go:130] > #
	I0429 13:23:44.206571  888828 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 13:23:44.206577  888828 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 13:23:44.206585  888828 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 13:23:44.206593  888828 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 13:23:44.206599  888828 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 13:23:44.206605  888828 command_runner.go:130] > #
	I0429 13:23:44.206611  888828 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 13:23:44.206619  888828 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 13:23:44.206623  888828 command_runner.go:130] > #
	I0429 13:23:44.206629  888828 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 13:23:44.206637  888828 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 13:23:44.206640  888828 command_runner.go:130] > #
	I0429 13:23:44.206645  888828 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 13:23:44.206653  888828 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 13:23:44.206657  888828 command_runner.go:130] > # limitation.
	I0429 13:23:44.206663  888828 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 13:23:44.206670  888828 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 13:23:44.206679  888828 command_runner.go:130] > runtime_type = "oci"
	I0429 13:23:44.206685  888828 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 13:23:44.206690  888828 command_runner.go:130] > runtime_config_path = ""
	I0429 13:23:44.206697  888828 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 13:23:44.206701  888828 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 13:23:44.206707  888828 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 13:23:44.206711  888828 command_runner.go:130] > monitor_env = [
	I0429 13:23:44.206718  888828 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 13:23:44.206724  888828 command_runner.go:130] > ]
	I0429 13:23:44.206729  888828 command_runner.go:130] > privileged_without_host_devices = false
	I0429 13:23:44.206737  888828 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 13:23:44.206744  888828 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 13:23:44.206754  888828 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 13:23:44.206763  888828 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0429 13:23:44.206773  888828 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 13:23:44.206780  888828 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 13:23:44.206792  888828 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 13:23:44.206802  888828 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 13:23:44.206810  888828 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 13:23:44.206818  888828 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 13:23:44.206824  888828 command_runner.go:130] > # Example:
	I0429 13:23:44.206828  888828 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 13:23:44.206833  888828 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 13:23:44.206841  888828 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 13:23:44.206846  888828 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 13:23:44.206851  888828 command_runner.go:130] > # cpuset = 0
	I0429 13:23:44.206855  888828 command_runner.go:130] > # cpushares = "0-1"
	I0429 13:23:44.206861  888828 command_runner.go:130] > # Where:
	I0429 13:23:44.206866  888828 command_runner.go:130] > # The workload name is workload-type.
	I0429 13:23:44.206875  888828 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 13:23:44.206882  888828 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 13:23:44.206888  888828 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 13:23:44.206898  888828 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 13:23:44.206906  888828 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0429 13:23:44.206914  888828 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 13:23:44.206920  888828 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 13:23:44.206931  888828 command_runner.go:130] > # Default value is set to true
	I0429 13:23:44.206938  888828 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 13:23:44.206950  888828 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 13:23:44.206958  888828 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 13:23:44.206965  888828 command_runner.go:130] > # Default value is set to 'false'
	I0429 13:23:44.206969  888828 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 13:23:44.206977  888828 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 13:23:44.206980  888828 command_runner.go:130] > #
	I0429 13:23:44.206986  888828 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 13:23:44.206991  888828 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 13:23:44.206997  888828 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 13:23:44.207003  888828 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 13:23:44.207009  888828 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 13:23:44.207013  888828 command_runner.go:130] > [crio.image]
	I0429 13:23:44.207018  888828 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 13:23:44.207022  888828 command_runner.go:130] > # default_transport = "docker://"
	I0429 13:23:44.207028  888828 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 13:23:44.207034  888828 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 13:23:44.207038  888828 command_runner.go:130] > # global_auth_file = ""
	I0429 13:23:44.207043  888828 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 13:23:44.207052  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.207057  888828 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 13:23:44.207063  888828 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 13:23:44.207068  888828 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 13:23:44.207073  888828 command_runner.go:130] > # This option supports live configuration reload.
	I0429 13:23:44.207077  888828 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 13:23:44.207082  888828 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 13:23:44.207088  888828 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0429 13:23:44.207094  888828 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0429 13:23:44.207100  888828 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 13:23:44.207103  888828 command_runner.go:130] > # pause_command = "/pause"
	I0429 13:23:44.207110  888828 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 13:23:44.207115  888828 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 13:23:44.207121  888828 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 13:23:44.207129  888828 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 13:23:44.207135  888828 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 13:23:44.207146  888828 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 13:23:44.207150  888828 command_runner.go:130] > # pinned_images = [
	I0429 13:23:44.207153  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207159  888828 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 13:23:44.207165  888828 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 13:23:44.207172  888828 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 13:23:44.207178  888828 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 13:23:44.207183  888828 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 13:23:44.207190  888828 command_runner.go:130] > # signature_policy = ""
	I0429 13:23:44.207196  888828 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 13:23:44.207205  888828 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 13:23:44.207214  888828 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 13:23:44.207222  888828 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0429 13:23:44.207229  888828 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 13:23:44.207236  888828 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 13:23:44.207242  888828 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 13:23:44.207251  888828 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 13:23:44.207257  888828 command_runner.go:130] > # changing them here.
	I0429 13:23:44.207261  888828 command_runner.go:130] > # insecure_registries = [
	I0429 13:23:44.207267  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207273  888828 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 13:23:44.207280  888828 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 13:23:44.207289  888828 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 13:23:44.207297  888828 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 13:23:44.207301  888828 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 13:23:44.207307  888828 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 13:23:44.207313  888828 command_runner.go:130] > # CNI plugins.
	I0429 13:23:44.207317  888828 command_runner.go:130] > [crio.network]
	I0429 13:23:44.207324  888828 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 13:23:44.207332  888828 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 13:23:44.207336  888828 command_runner.go:130] > # cni_default_network = ""
	I0429 13:23:44.207344  888828 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 13:23:44.207350  888828 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 13:23:44.207355  888828 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 13:23:44.207382  888828 command_runner.go:130] > # plugin_dirs = [
	I0429 13:23:44.207391  888828 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 13:23:44.207406  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207416  888828 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 13:23:44.207422  888828 command_runner.go:130] > [crio.metrics]
	I0429 13:23:44.207427  888828 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 13:23:44.207433  888828 command_runner.go:130] > enable_metrics = true
	I0429 13:23:44.207438  888828 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 13:23:44.207445  888828 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 13:23:44.207455  888828 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0429 13:23:44.207463  888828 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 13:23:44.207471  888828 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 13:23:44.207476  888828 command_runner.go:130] > # metrics_collectors = [
	I0429 13:23:44.207482  888828 command_runner.go:130] > # 	"operations",
	I0429 13:23:44.207487  888828 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 13:23:44.207493  888828 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 13:23:44.207498  888828 command_runner.go:130] > # 	"operations_errors",
	I0429 13:23:44.207504  888828 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 13:23:44.207509  888828 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 13:23:44.207515  888828 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 13:23:44.207520  888828 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 13:23:44.207527  888828 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 13:23:44.207531  888828 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 13:23:44.207538  888828 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 13:23:44.207544  888828 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 13:23:44.207551  888828 command_runner.go:130] > # 	"containers_oom_total",
	I0429 13:23:44.207555  888828 command_runner.go:130] > # 	"containers_oom",
	I0429 13:23:44.207562  888828 command_runner.go:130] > # 	"processes_defunct",
	I0429 13:23:44.207566  888828 command_runner.go:130] > # 	"operations_total",
	I0429 13:23:44.207570  888828 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 13:23:44.207577  888828 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 13:23:44.207581  888828 command_runner.go:130] > # 	"operations_errors_total",
	I0429 13:23:44.207586  888828 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 13:23:44.207590  888828 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 13:23:44.207597  888828 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 13:23:44.207601  888828 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 13:23:44.207608  888828 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 13:23:44.207613  888828 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 13:23:44.207626  888828 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 13:23:44.207632  888828 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 13:23:44.207636  888828 command_runner.go:130] > # ]
	I0429 13:23:44.207640  888828 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 13:23:44.207647  888828 command_runner.go:130] > # metrics_port = 9090
	I0429 13:23:44.207652  888828 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 13:23:44.207658  888828 command_runner.go:130] > # metrics_socket = ""
	I0429 13:23:44.207663  888828 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 13:23:44.207671  888828 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 13:23:44.207680  888828 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 13:23:44.207687  888828 command_runner.go:130] > # certificate on any modification event.
	I0429 13:23:44.207691  888828 command_runner.go:130] > # metrics_cert = ""
	I0429 13:23:44.207698  888828 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 13:23:44.207703  888828 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 13:23:44.207710  888828 command_runner.go:130] > # metrics_key = ""
	I0429 13:23:44.207715  888828 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 13:23:44.207721  888828 command_runner.go:130] > [crio.tracing]
	I0429 13:23:44.207727  888828 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 13:23:44.207733  888828 command_runner.go:130] > # enable_tracing = false
	I0429 13:23:44.207738  888828 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0429 13:23:44.207746  888828 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 13:23:44.207754  888828 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 13:23:44.207760  888828 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0429 13:23:44.207764  888828 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 13:23:44.207770  888828 command_runner.go:130] > [crio.nri]
	I0429 13:23:44.207774  888828 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 13:23:44.207781  888828 command_runner.go:130] > # enable_nri = false
	I0429 13:23:44.207785  888828 command_runner.go:130] > # NRI socket to listen on.
	I0429 13:23:44.207792  888828 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 13:23:44.207796  888828 command_runner.go:130] > # NRI plugin directory to use.
	I0429 13:23:44.207803  888828 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 13:23:44.207808  888828 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 13:23:44.207815  888828 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 13:23:44.207821  888828 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 13:23:44.207834  888828 command_runner.go:130] > # nri_disable_connections = false
	I0429 13:23:44.207842  888828 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 13:23:44.207852  888828 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 13:23:44.207860  888828 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 13:23:44.207864  888828 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 13:23:44.207872  888828 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 13:23:44.207879  888828 command_runner.go:130] > [crio.stats]
	I0429 13:23:44.207885  888828 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 13:23:44.207892  888828 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 13:23:44.207899  888828 command_runner.go:130] > # stats_collection_period = 0
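	The [crio.metrics] section above sets enable_metrics = true with the default metrics_port of 9090, so CRI-O exposes a Prometheus endpoint on the node. A minimal sketch of scraping it, assuming the server is reachable on localhost at that default port (the address and the crio_ filter prefix are illustrative assumptions, not part of the log):
	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)
	
	func main() {
		// Assumes CRI-O's metrics server is reachable on the default port 9090.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
	
		// Print only the crio_-prefixed series to keep the output short.
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "crio_") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}
	Run on the node, this should list series corresponding to the collectors enumerated in the config dump above.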
	I0429 13:23:44.208067  888828 cni.go:84] Creating CNI manager for ""
	I0429 13:23:44.208083  888828 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:23:44.208096  888828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:23:44.208118  888828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-404116 NodeName:multinode-404116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:23:44.208301  888828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-404116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
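	The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---), which is copied to /var/tmp/minikube/kubeadm.yaml.new on the node a few lines below. A standard-library-only sketch that splits such a stream and reports each document's kind (the file path is taken from the log; everything else is illustrative):
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"strings"
	)
	
	func main() {
		// Path taken from the log below; minikube copies the generated
		// config to /var/tmp/minikube/kubeadm.yaml.new on the node.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
	
		// Split the multi-document YAML stream on document separators and
		// report the kind: line of each document.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
				}
			}
		}
	}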
	
	I0429 13:23:44.208379  888828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:23:44.220927  888828 command_runner.go:130] > kubeadm
	I0429 13:23:44.220955  888828 command_runner.go:130] > kubectl
	I0429 13:23:44.220960  888828 command_runner.go:130] > kubelet
	I0429 13:23:44.220994  888828 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:23:44.221063  888828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:23:44.233095  888828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 13:23:44.254378  888828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:23:44.274260  888828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0429 13:23:44.295440  888828 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0429 13:23:44.300393  888828 command_runner.go:130] > 192.168.39.179	control-plane.minikube.internal
	I0429 13:23:44.300497  888828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:23:44.454621  888828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:23:44.471210  888828 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116 for IP: 192.168.39.179
	I0429 13:23:44.471239  888828 certs.go:194] generating shared ca certs ...
	I0429 13:23:44.471272  888828 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:23:44.471459  888828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:23:44.471498  888828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:23:44.471507  888828 certs.go:256] generating profile certs ...
	I0429 13:23:44.471581  888828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/client.key
	I0429 13:23:44.471656  888828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key.55dd999f
	I0429 13:23:44.471694  888828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key
	I0429 13:23:44.471705  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 13:23:44.471716  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 13:23:44.471732  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 13:23:44.471747  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 13:23:44.471758  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 13:23:44.471771  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 13:23:44.471782  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 13:23:44.471793  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 13:23:44.471842  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:23:44.471869  888828 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:23:44.471879  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:23:44.471901  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:23:44.471921  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:23:44.471942  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:23:44.471993  888828 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:23:44.472019  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem -> /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.472031  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.472043  888828 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.472684  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:23:44.503777  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:23:44.530413  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:23:44.558690  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:23:44.586903  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:23:44.616032  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:23:44.645868  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:23:44.674621  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/multinode-404116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 13:23:44.704591  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:23:44.733658  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:23:44.761642  888828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:23:44.790133  888828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:23:44.811700  888828 ssh_runner.go:195] Run: openssl version
	I0429 13:23:44.818448  888828 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 13:23:44.818552  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:23:44.831376  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837034  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837079  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.837143  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:23:44.843756  888828 command_runner.go:130] > 51391683
	I0429 13:23:44.843902  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 13:23:44.855416  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:23:44.868341  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874245  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874291  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.874342  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:23:44.881295  888828 command_runner.go:130] > 3ec20f2e
	I0429 13:23:44.881396  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:23:44.892567  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:23:44.905077  888828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910771  888828 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910830  888828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.910905  888828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:23:44.917547  888828 command_runner.go:130] > b5213941
	I0429 13:23:44.917664  888828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
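	The steps above install each CA certificate by hashing it with openssl x509 -hash -noout and symlinking it into /etc/ssl/certs as <hash>.0. A sketch reproducing those two steps for one file; the paths are taken from the log, and running it needs the same root privileges as the sudo invocations above:
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		// Path taken from the log above; adjust for the other certificates.
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
	
		// Equivalent of: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
	
		// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ignore error: the link may not exist yet
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}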
	I0429 13:23:44.928854  888828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:23:44.934443  888828 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:23:44.934476  888828 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 13:23:44.934482  888828 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0429 13:23:44.934492  888828 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:23:44.934502  888828 command_runner.go:130] > Access: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934510  888828 command_runner.go:130] > Modify: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934516  888828 command_runner.go:130] > Change: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934524  888828 command_runner.go:130] >  Birth: 2024-04-29 13:17:27.536312926 +0000
	I0429 13:23:44.934595  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 13:23:44.941334  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.941495  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 13:23:44.948268  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.948416  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 13:23:44.955108  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.955298  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 13:23:44.962356  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.962513  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 13:23:44.969701  888828 command_runner.go:130] > Certificate will not expire
	I0429 13:23:44.969830  888828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 13:23:44.976571  888828 command_runner.go:130] > Certificate will not expire
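	Each openssl x509 -noout -checkend 86400 run above asks whether a certificate expires within the next 24 hours. A standard-library equivalent of that check, shown for the apiserver-kubelet-client certificate named in the log (any of the other listed paths would work the same way):
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		// Same file checked with `openssl x509 -checkend 86400` in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
	
		// -checkend 86400: does the certificate expire within the next 24h?
		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}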
	I0429 13:23:44.976703  888828 kubeadm.go:391] StartCluster: {Name:multinode-404116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-404116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:23:44.976877  888828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:23:44.976954  888828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:23:45.018450  888828 command_runner.go:130] > 44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649
	I0429 13:23:45.018482  888828 command_runner.go:130] > ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc
	I0429 13:23:45.018501  888828 command_runner.go:130] > e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1
	I0429 13:23:45.018508  888828 command_runner.go:130] > b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0
	I0429 13:23:45.018513  888828 command_runner.go:130] > a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288
	I0429 13:23:45.018519  888828 command_runner.go:130] > 429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09
	I0429 13:23:45.018524  888828 command_runner.go:130] > 80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd
	I0429 13:23:45.018531  888828 command_runner.go:130] > 972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8
	I0429 13:23:45.020241  888828 cri.go:89] found id: "44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649"
	I0429 13:23:45.020265  888828 cri.go:89] found id: "ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc"
	I0429 13:23:45.020268  888828 cri.go:89] found id: "e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1"
	I0429 13:23:45.020271  888828 cri.go:89] found id: "b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0"
	I0429 13:23:45.020274  888828 cri.go:89] found id: "a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288"
	I0429 13:23:45.020279  888828 cri.go:89] found id: "429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09"
	I0429 13:23:45.020282  888828 cri.go:89] found id: "80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd"
	I0429 13:23:45.020284  888828 cri.go:89] found id: "972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8"
	I0429 13:23:45.020287  888828 cri.go:89] found id: ""
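	The crictl ps -a --quiet output above is one container ID per line, and the "found id" list ends with an empty entry. One plausible reading, sketched below purely for illustration and not as minikube's actual code, is that the quiet output is split on newlines, so the trailing newline yields the final empty id:
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Shape of the `crictl ps -a --quiet ...` output logged above:
		// one container ID per line, terminated by a trailing newline.
		// The IDs here are the first two from the log, for illustration only.
		raw := "44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649\n" +
			"ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc\n"
	
		// Splitting on newlines leaves a final empty element, which would
		// account for the empty `found id: ""` entry at the end of the log.
		for _, id := range strings.Split(raw, "\n") {
			fmt.Printf("found id: %q\n", id)
		}
	}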
	I0429 13:23:45.020339  888828 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.589805924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397255589771542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe60ae48-f828-441c-bebb-ad63af11f90a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.591104845Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=593bdb0d-ebdd-4997-9d34-74a609efec95 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.591563660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83074558-75db-40fe-a7e0-16f86933265b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.591625503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83074558-75db-40fe-a7e0-16f86933265b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.591823738Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qv47r,Uid:749d96cc-d7ac-4204-8508-554f13dd2f79,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397065783641335,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:23:51.636801039Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmfbk,Uid:e6e94d18-d1f4-41db-8c32-a324a4023f94,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1714397032092906939,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:23:51.636788033Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&PodSandboxMetadata{Name:kindnet-f8fr7,Uid:93b1fe59-3774-4270-84bc-d3028250d27e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397031987797528,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-04-29T13:23:51.636803606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fc25a9d4-3dae-4000-a303-6afc0ef95463,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397031986149477,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T13:23:51.636799606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&PodSandboxMetadata{Name:kube-proxy-rz7lc,Uid:729af88d-aabc-412e-a4ba-e6fde2391fe5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397031970934661,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:23:51.636798193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-404116,Uid:44cba3dcb77ac10b484eca740d179455,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397027145107339,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 44cba3dcb77ac10b484eca740d179455,kubernetes.io/config.seen: 2024-04-29T13:23:46.636428888Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-404116,Uid:95f62ebd32fa6ddc28c071e323045845,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397027143343387,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95f62ebd32fa6ddc28c071e323045845,kubernetes.io/config.seen: 2024-04-29T13:23:46.636427819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-404116,Uid:123c280b9f7e097c7ec5c6fc047b24f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397027140512521,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.179:2379,kubernetes.io/config.hash: 123c280b9f7e097c7ec5c6fc047b24f5,kubernetes.io/config.seen: 2024-04-29T13:23:46.636421212Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-404116,Uid:4d1e40396d84ae68693fe89997290741,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714397027138893436,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.179:8443,kuberne
tes.io/config.hash: 4d1e40396d84ae68693fe89997290741,kubernetes.io/config.seen: 2024-04-29T13:23:46.636426538Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qv47r,Uid:749d96cc-d7ac-4204-8508-554f13dd2f79,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396718201591045,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:18:37.889802779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fc25a9d4-3dae-4000-a303-6afc0ef95463,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1714396673647270471,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T13:17:53.340247269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmfbk,Uid:e6e94d18-d1f4-41db-8c32-a324a4023f94,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396673639662285,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:17:53.330957550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&PodSandboxMetadata{Name:kube-proxy-rz7lc,Uid:729af88d-aabc-412e-a4ba-e6fde2391fe5,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396672117182056,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:17:50.309672625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&PodSandboxMetadata{Name:kindnet-f8fr7,Uid:93b1fe59-3774-4270-84bc-d3028250d27e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396671523191143,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T13:17:50.309835121Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-404116,Uid:95f62ebd32fa6ddc28c071e323045845,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396650869017610,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95f62ebd32fa6ddc28c071e323045845,kubernetes.io/config.seen: 2024-04-29T13:17:30.358075283Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-404116,Uid:44cba3dcb77ac10b484eca740d179455,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396650839763723,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 44cba3dcb77ac10b484eca740d179455,kubernetes.io/config.seen: 2024-04-29T13:17:30.358067193Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-404116,Uid:4d1e40396d84ae68693fe89997290741,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396650838895723,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.179:8443,kubernetes.io/config.hash: 4d1e40396d84ae68693fe89997290741,kubernetes.io/config.seen: 2024-04-29T13:17:30.358074099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-404116,Uid:123c280b9f7e097c7ec5c6fc047b24f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714396650838279896,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.179:2379,kubernetes.io/config.hash: 123c280b9f7e097c7ec5c6fc047b24f5,kubernetes.io/config.seen: 2024-04-29T13:17:30.358072757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=593bdb0d-ebdd-4997-9d34-74a609efec95 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.592044599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83074558-75db-40fe-a7e0-16f86933265b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.592927806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b23b46-84b5-46c0-a452-db00e0519588 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.592981376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b23b46-84b5-46c0-a452-db00e0519588 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.595165256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b23b46-84b5-46c0-a452-db00e0519588 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.612071825Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=58af0c29-3589-4157-a027-8621673613f2 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.612158356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58af0c29-3589-4157-a027-8621673613f2 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.650345848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd112889-184f-41da-96f2-b9b07679b6d4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.650444914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd112889-184f-41da-96f2-b9b07679b6d4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.653109142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d807b9c3-e4ac-442a-9114-1fde53c02cb4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.654011782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397255653979185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d807b9c3-e4ac-442a-9114-1fde53c02cb4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.654980956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7241b69-f932-4284-8a56-cee30fb9dd5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.655085930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7241b69-f932-4284-8a56-cee30fb9dd5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.655646200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7241b69-f932-4284-8a56-cee30fb9dd5b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.710903688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26e9360a-a955-4d32-9108-d2f46981d274 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.710983349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26e9360a-a955-4d32-9108-d2f46981d274 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.712317207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=036d75b3-44bc-4860-a965-e5d48f2f5baf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.712714201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714397255712689356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=036d75b3-44bc-4860-a965-e5d48f2f5baf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.713398834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cbcc74c-e393-4282-ace3-98b45d70521c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.713459004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cbcc74c-e393-4282-ace3-98b45d70521c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:27:35 multinode-404116 crio[2837]: time="2024-04-29 13:27:35.713871176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f958c0312feaee030ef88c3d32bb1596a006c52322d2130f51f5feeabd321a02,PodSandboxId:29f53cb24b75d6efb5eb6a560eb9d5bc09438ecd1eafa59ef1c7f68454a0b418,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714397065951425194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945,PodSandboxId:65ae91209d13b8785a54eac7080500147a20d850c29dbf6446debb4a0e8eb510,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714397032488828690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4,PodSandboxId:8454194fb8485d007245d820d032bd4f75d5f42f67a3a4c8f51c8f2dff45ef86,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714397032560900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65216a06dd7f7d8d95ee8dca72408da640a375947d66dfd82334209a82abf89,PodSandboxId:974d2f9aea03ec006958698c2f8f27b1b70f9103e52421b018f420f1a4d32253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714397032350303097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},An
notations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b,PodSandboxId:0928b94fea1b577d2235ba87f522cbaf4f8586363971f9b35bc39592eb5df803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714397032267141593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.ku
bernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966,PodSandboxId:1310d71b84ed22be335487533ef2b8e05a0d901b13adf486987f1adc08505cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714397027435611309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.container.hash: fb40558c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676,PodSandboxId:c49a3d526293b040ad80f87d1fe15245b824d7e9db7ec0c4041d7d390fa1e44b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714397027443940563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40,PodSandboxId:1c1099fbf7e5a6c9c0a64e892d335f7d05a515412b069e465d1a22bd19d0f9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714397027373931807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6,PodSandboxId:2a043ae3264a566a861a4c75e1d2c3db08b2e87e1139c588f00a7b56ca4aedb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714397027337019611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash: 698106f1,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bc8db9dd1e592b2563c81a129f90a7393e9a1dcdb1ed1633de3ad33a40f2,PodSandboxId:9b2f791d41c72d6fc168ab734462e3bf4c1626edba31c8da7a65a6e6f9ba93c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714396719171138943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qv47r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749d96cc-d7ac-4204-8508-554f13dd2f79,},Annotations:map[string]string{io.kubernetes.container.hash: 43388e1b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649,PodSandboxId:76b2c74a9b1b8fc2cc6488db55a2249194b8fcf4c05bfc490700fa1efd86d522,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714396673848655413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmfbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e94d18-d1f4-41db-8c32-a324a4023f94,},Annotations:map[string]string{io.kubernetes.container.hash: 836dc72e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae888ddcc9cdb07af6ea9d71f309348e79005106642230575eca9cfbd51b66bc,PodSandboxId:e77c634d7b43540849661bd87de3671368e1dfadae8f90a86c514eb21d83d824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714396673796580932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: fc25a9d4-3dae-4000-a303-6afc0ef95463,},Annotations:map[string]string{io.kubernetes.container.hash: 84087d24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1,PodSandboxId:1d908aac8eb9bda9cbe28b03c26937769250dfde6704641531e496f407babf46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714396672243588284,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rz7lc,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 729af88d-aabc-412e-a4ba-e6fde2391fe5,},Annotations:map[string]string{io.kubernetes.container.hash: cc2f68d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0,PodSandboxId:0f11f8fd000fcc540ecb3a3aea28fd8d5af32ed10ff62e0fb484e686cb9d8214,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714396671976942204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8fr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1fe59-3774-4270-84bc
-d3028250d27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9b963d87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09,PodSandboxId:01572ba0feef08ad7be3d0ecce5ede618fb391a5ea97278abd0d4a25cf69f765,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714396651121189251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44cba3dcb77ac10b484eca740d179455,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288,PodSandboxId:d6b1c1702d9cc54dfa2a2d3eb681513a7bf47dd2f91854ca800e73a80099a64f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714396651126107002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123c280b9f7e097c7ec5c6fc047b24f5,},Annotations:map[string]string{io.kubernetes.
container.hash: fb40558c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd,PodSandboxId:0022be900755ade00873ea0a264f087b7fe5f1b6957e7ab53db0cd074a0b1c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714396651067668609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1e40396d84ae68693fe89997290741,},Annotations:map[string]string{io.kubernetes.container.hash:
698106f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8,PodSandboxId:b28561774589cb34fedb27f30ef850854aaa5a48ec751f772ee01e6bdf2b5e28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714396651062791347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-404116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f62ebd32fa6ddc28c071e323045845,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cbcc74c-e393-4282-ace3-98b45d70521c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f958c0312feae       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   29f53cb24b75d       busybox-fc5497c4f-qv47r
	d3d662af5e7ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   8454194fb8485       coredns-7db6d8ff4d-mmfbk
	303bc49134b18       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   65ae91209d13b       kindnet-f8fr7
	c65216a06dd7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   974d2f9aea03e       storage-provisioner
	e7f592c1524d6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   0928b94fea1b5       kube-proxy-rz7lc
	82d8672ffe502       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   c49a3d526293b       kube-scheduler-multinode-404116
	d6c6430f33020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   1310d71b84ed2       etcd-multinode-404116
	9db65e6fb1cd0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   1c1099fbf7e5a       kube-controller-manager-multinode-404116
	9bc639c8a4d93       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   2a043ae3264a5       kube-apiserver-multinode-404116
	73a7bc8db9dd1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   9b2f791d41c72       busybox-fc5497c4f-qv47r
	44ee48270c02d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   76b2c74a9b1b8       coredns-7db6d8ff4d-mmfbk
	ae888ddcc9cdb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   e77c634d7b435       storage-provisioner
	e8798b622aa8f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   1d908aac8eb9b       kube-proxy-rz7lc
	b0e4f3651130b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   0f11f8fd000fc       kindnet-f8fr7
	a1fd6f8fc5902       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   d6b1c1702d9cc       etcd-multinode-404116
	429fa04058735       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   01572ba0feef0       kube-scheduler-multinode-404116
	80662b05a48fd       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   0022be900755a       kube-apiserver-multinode-404116
	972052fbdfae7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   b28561774589c       kube-controller-manager-multinode-404116
	
	
	==> coredns [44ee48270c02d642dd1ea4f9311d3c5435cef2150e8cade77c3fed588bd77649] <==
	[INFO] 10.244.0.3:44073 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002033086s
	[INFO] 10.244.0.3:39857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101505s
	[INFO] 10.244.0.3:37799 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051577s
	[INFO] 10.244.0.3:47790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001609677s
	[INFO] 10.244.0.3:43843 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061147s
	[INFO] 10.244.0.3:38165 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094063s
	[INFO] 10.244.0.3:59070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041907s
	[INFO] 10.244.1.2:47443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161709s
	[INFO] 10.244.1.2:52611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013738s
	[INFO] 10.244.1.2:41816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078439s
	[INFO] 10.244.1.2:46590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159774s
	[INFO] 10.244.0.3:49983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098713s
	[INFO] 10.244.0.3:50156 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235118s
	[INFO] 10.244.0.3:48129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066664s
	[INFO] 10.244.0.3:60037 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084661s
	[INFO] 10.244.1.2:38798 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172291s
	[INFO] 10.244.1.2:43137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286744s
	[INFO] 10.244.1.2:41187 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141695s
	[INFO] 10.244.1.2:53196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153859s
	[INFO] 10.244.0.3:47734 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113019s
	[INFO] 10.244.0.3:43346 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094125s
	[INFO] 10.244.0.3:41390 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079452s
	[INFO] 10.244.0.3:58818 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d3d662af5e7ed78e0f103b8b9b7f7b2d833dce172f804a3699fedbaa5dc77ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46565 - 46945 "HINFO IN 6282464484539575923.7220632988662159516. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035039689s
	
	
	==> describe nodes <==
	Name:               multinode-404116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-404116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=multinode-404116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T13_17_37_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-404116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:27:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:23:51 +0000   Mon, 29 Apr 2024 13:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    multinode-404116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f8c851a6b7a4aa0bd6b1654a3273021
	  System UUID:                9f8c851a-6b7a-4aa0-bd6b-1654a3273021
	  Boot ID:                    63737963-1177-4fbc-9a7a-2c3628aec3ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qv47r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 coredns-7db6d8ff4d-mmfbk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m46s
	  kube-system                 etcd-multinode-404116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-f8fr7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m46s
	  kube-system                 kube-apiserver-multinode-404116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-404116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rz7lc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-scheduler-multinode-404116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m47s                  node-controller  Node multinode-404116 event: Registered Node multinode-404116 in Controller
	  Normal  NodeReady                9m43s                  kubelet          Node multinode-404116 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-404116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-404116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-404116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-404116 event: Registered Node multinode-404116 in Controller
	
	
	Name:               multinode-404116-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-404116-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=multinode-404116
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T13_24_31_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:24:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-404116-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:25:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:25:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:25:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:25:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 13:25:01 +0000   Mon, 29 Apr 2024 13:25:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-404116-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66ebf1df42804f89a7563922d8e28417
	  System UUID:                66ebf1df-4280-4f89-a756-3922d8e28417
	  Boot ID:                    e350f6b9-8643-4840-ac6e-3ca0f96041b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s4jxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-gg2jn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m9s
	  kube-system                 kube-proxy-w7rmz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m                     kube-proxy       
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m10s (x2 over 9m10s)  kubelet          Node multinode-404116-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m10s (x2 over 9m10s)  kubelet          Node multinode-404116-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m10s (x2 over 9m10s)  kubelet          Node multinode-404116-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m1s                   kubelet          Node multinode-404116-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)    kubelet          Node multinode-404116-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)    kubelet          Node multinode-404116-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)    kubelet          Node multinode-404116-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-404116-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-404116-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.064195] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075164] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.178778] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.169832] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.317109] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.733022] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.069948] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.539470] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.519477] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.062117] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.083406] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.197283] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.109147] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.166042] kauditd_printk_skb: 80 callbacks suppressed
	[Apr29 13:23] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +0.159669] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.190183] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.151119] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.319070] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.824509] systemd-fstab-generator[2921]: Ignoring "noauto" option for root device
	[  +2.071430] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +5.737680] kauditd_printk_skb: 184 callbacks suppressed
	[Apr29 13:24] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.879817] systemd-fstab-generator[3856]: Ignoring "noauto" option for root device
	[ +21.328173] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [a1fd6f8fc5902eedaed3e7986622e04992796768f9ce97f6e3bb18f0974ae288] <==
	{"level":"info","ts":"2024-04-29T13:17:31.770864Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:17:31.77588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T13:17:31.789318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:17:31.789422Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:18:27.026878Z","caller":"traceutil/trace.go:171","msg":"trace[790673457] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"111.157391ms","start":"2024-04-29T13:18:26.915561Z","end":"2024-04-29T13:18:27.026718Z","steps":["trace[790673457] 'process raft request'  (duration: 111.110794ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:18:27.027137Z","caller":"traceutil/trace.go:171","msg":"trace[90457516] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"178.809661ms","start":"2024-04-29T13:18:26.848318Z","end":"2024-04-29T13:18:27.027128Z","steps":["trace[90457516] 'process raft request'  (duration: 172.426634ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:18:27.027308Z","caller":"traceutil/trace.go:171","msg":"trace[1666134087] linearizableReadLoop","detail":"{readStateIndex:504; appliedIndex:503; }","duration":"173.460642ms","start":"2024-04-29T13:18:26.85384Z","end":"2024-04-29T13:18:27.027301Z","steps":["trace[1666134087] 'read index received'  (duration: 166.916513ms)","trace[1666134087] 'applied index is now lower than readState.Index'  (duration: 6.543343ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T13:18:27.0275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.562284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T13:18:27.027573Z","caller":"traceutil/trace.go:171","msg":"trace[547154128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"173.750794ms","start":"2024-04-29T13:18:26.853816Z","end":"2024-04-29T13:18:27.027567Z","steps":["trace[547154128] 'agreement among raft nodes before linearized reading'  (duration: 173.536013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T13:19:13.78466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.452568ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10801196691884090663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-404116-m03.17cac2c398e7b428\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-404116-m03.17cac2c398e7b428\" value_size:640 lease:1577824655029314563 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T13:19:13.785054Z","caller":"traceutil/trace.go:171","msg":"trace[742885260] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"209.551363ms","start":"2024-04-29T13:19:13.575488Z","end":"2024-04-29T13:19:13.78504Z","steps":["trace[742885260] 'process raft request'  (duration: 209.491473ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:19:13.785055Z","caller":"traceutil/trace.go:171","msg":"trace[2102936144] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"241.892328ms","start":"2024-04-29T13:19:13.54308Z","end":"2024-04-29T13:19:13.784972Z","steps":["trace[2102936144] 'process raft request'  (duration: 118.302847ms)","trace[2102936144] 'compare'  (duration: 122.322474ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T13:19:13.785094Z","caller":"traceutil/trace.go:171","msg":"trace[389221968] linearizableReadLoop","detail":"{readStateIndex:645; appliedIndex:644; }","duration":"240.502652ms","start":"2024-04-29T13:19:13.544585Z","end":"2024-04-29T13:19:13.785088Z","steps":["trace[389221968] 'read index received'  (duration: 116.737185ms)","trace[389221968] 'applied index is now lower than readState.Index'  (duration: 123.764318ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T13:19:13.785184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.580546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-404116-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T13:19:13.785531Z","caller":"traceutil/trace.go:171","msg":"trace[697336960] range","detail":"{range_begin:/registry/minions/multinode-404116-m03; range_end:; response_count:1; response_revision:612; }","duration":"240.960604ms","start":"2024-04-29T13:19:13.544555Z","end":"2024-04-29T13:19:13.785516Z","steps":["trace[697336960] 'agreement among raft nodes before linearized reading'  (duration: 240.548002ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:22:11.369101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T13:22:11.369422Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-404116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"]}
	{"level":"warn","ts":"2024-04-29T13:22:11.369596Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.369729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.466969Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.179:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T13:22:11.467127Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.179:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T13:22:11.468619Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9edf382f8ea095e5","current-leader-member-id":"9edf382f8ea095e5"}
	{"level":"info","ts":"2024-04-29T13:22:11.472251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:22:11.472418Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:22:11.472446Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-404116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"]}
	
	
	==> etcd [d6c6430f33020eeb286e097c9a5e6db22de1ede43685ffe82d0e887c74217966] <==
	{"level":"info","ts":"2024-04-29T13:23:48.220702Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:23:48.220808Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:23:48.22117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 switched to configuration voters=(11447930554706597349)"}
	{"level":"info","ts":"2024-04-29T13:23:48.22446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b3e38a398ac243f2","local-member-id":"9edf382f8ea095e5","added-peer-id":"9edf382f8ea095e5","added-peer-peer-urls":["https://192.168.39.179:2380"]}
	{"level":"info","ts":"2024-04-29T13:23:48.224762Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b3e38a398ac243f2","local-member-id":"9edf382f8ea095e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:23:48.224859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:23:48.243987Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:23:48.244272Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:23:48.244304Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.179:2380"}
	{"level":"info","ts":"2024-04-29T13:23:48.244567Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9edf382f8ea095e5","initial-advertise-peer-urls":["https://192.168.39.179:2380"],"listen-peer-urls":["https://192.168.39.179:2380"],"advertise-client-urls":["https://192.168.39.179:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.179:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:23:48.24461Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:23:49.599373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 received MsgPreVoteResp from 9edf382f8ea095e5 at term 2"}
	{"level":"info","ts":"2024-04-29T13:23:49.599514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 received MsgVoteResp from 9edf382f8ea095e5 at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9edf382f8ea095e5 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.599555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9edf382f8ea095e5 elected leader 9edf382f8ea095e5 at term 3"}
	{"level":"info","ts":"2024-04-29T13:23:49.608488Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9edf382f8ea095e5","local-member-attributes":"{Name:multinode-404116 ClientURLs:[https://192.168.39.179:2379]}","request-path":"/0/members/9edf382f8ea095e5/attributes","cluster-id":"b3e38a398ac243f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:23:49.608528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:23:49.608696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:23:49.609174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:23:49.609294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:23:49.611122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.179:2379"}
	{"level":"info","ts":"2024-04-29T13:23:49.611399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:27:36 up 10 min,  0 users,  load average: 0.29, 0.28, 0.17
	Linux multinode-404116 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [303bc49134b18ab01d27cbb25c508e809974584a2f6ef753852386906365b945] <==
	I0429 13:26:33.669553       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:26:43.674031       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:26:43.674083       1 main.go:227] handling current node
	I0429 13:26:43.674093       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:26:43.674099       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:26:53.734950       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:26:53.734990       1 main.go:227] handling current node
	I0429 13:26:53.735001       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:26:53.735006       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:27:03.746092       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:27:03.746297       1 main.go:227] handling current node
	I0429 13:27:03.746326       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:27:03.746351       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:27:13.760522       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:27:13.760629       1 main.go:227] handling current node
	I0429 13:27:13.760661       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:27:13.760680       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:27:23.767489       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:27:23.767619       1 main.go:227] handling current node
	I0429 13:27:23.767655       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:27:23.767678       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:27:33.773986       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:27:33.774042       1 main.go:227] handling current node
	I0429 13:27:33.774071       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:27:33.774078       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b0e4f3651130b07ae7bdfa44b5e844b85cc461cbda2d26ca9e65adeab81194e0] <==
	I0429 13:21:23.162092       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:33.173719       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:33.173928       1 main.go:227] handling current node
	I0429 13:21:33.173977       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:33.173998       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:33.174169       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:33.174190       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:43.188675       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:43.188824       1 main.go:227] handling current node
	I0429 13:21:43.188857       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:43.188878       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:43.189051       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:43.189075       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:21:53.194099       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:21:53.194143       1 main.go:227] handling current node
	I0429 13:21:53.194154       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:21:53.194159       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:21:53.194315       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:21:53.194339       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	I0429 13:22:03.204391       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0429 13:22:03.204433       1 main.go:227] handling current node
	I0429 13:22:03.205042       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 13:22:03.205110       1 main.go:250] Node multinode-404116-m02 has CIDR [10.244.1.0/24] 
	I0429 13:22:03.205502       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0429 13:22:03.205556       1 main.go:250] Node multinode-404116-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80662b05a48fdddd10e9266ed5d06253c3dfd55110de1153774fafe11cdb7abd] <==
	E0429 13:22:11.404933       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.405005       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.405108       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.405176       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.405472       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.406461       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.406667       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.407334       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407715       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407786       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.407828       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408069       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.408176       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.407555       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408390       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408590       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.408949       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.410550       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411409       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.407848       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411519       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 13:22:11.411582       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 13:22:11.411720       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 13:22:11.411857       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 13:22:11.411936       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9bc639c8a4d935ee1fb593ca596dcf1c147644b42f8d03bacc53d85363ace5c6] <==
	I0429 13:23:51.218666       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 13:23:51.218835       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 13:23:51.218905       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 13:23:51.219753       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 13:23:51.219409       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 13:23:51.219478       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:23:51.219964       1 aggregator.go:165] initial CRD sync complete...
	I0429 13:23:51.219971       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 13:23:51.219977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 13:23:51.219983       1 cache.go:39] Caches are synced for autoregister controller
	I0429 13:23:51.220268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 13:23:51.228702       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 13:23:51.229444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 13:23:51.241676       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:23:51.241738       1 policy_source.go:224] refreshing policies
	I0429 13:23:51.241844       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 13:23:51.283129       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:23:52.021853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 13:23:53.890523       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:23:54.035088       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:23:54.065922       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:23:54.182824       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:23:54.205827       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 13:24:03.531146       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 13:24:03.640124       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [972052fbdfae726267d5762c3f1595a08297beae93b822bd328f77134b8d95b8] <==
	I0429 13:18:27.030974       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m02\" does not exist"
	I0429 13:18:27.098993       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m02" podCIDRs=["10.244.1.0/24"]
	I0429 13:18:29.442804       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-404116-m02"
	I0429 13:18:35.661927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:18:37.885960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.416925ms"
	I0429 13:18:37.908619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.451743ms"
	I0429 13:18:37.929064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.387778ms"
	I0429 13:18:37.929173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.197µs"
	I0429 13:18:40.000043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.642481ms"
	I0429 13:18:40.000292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.34µs"
	I0429 13:18:40.137140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.60634ms"
	I0429 13:18:40.138257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.953µs"
	I0429 13:19:13.790054       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:13.792319       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:19:13.803596       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.2.0/24"]
	I0429 13:19:14.459486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-404116-m03"
	I0429 13:19:24.128069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:54.805379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:56.091643       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:19:56.091744       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:19:56.103149       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.3.0/24"]
	I0429 13:20:04.502601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:20:49.513440       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m03"
	I0429 13:20:49.582517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.956745ms"
	I0429 13:20:49.582754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.959µs"
	
	
	==> kube-controller-manager [9db65e6fb1cd05be70553bc89450c268222f66155dd52b934ae149c330710c40] <==
	I0429 13:24:30.856846       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m02" podCIDRs=["10.244.1.0/24"]
	I0429 13:24:32.728829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="944.96µs"
	I0429 13:24:32.776837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.625µs"
	I0429 13:24:32.791927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.239µs"
	I0429 13:24:32.821130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.546µs"
	I0429 13:24:32.832291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.56µs"
	I0429 13:24:32.843083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.977µs"
	I0429 13:24:34.620474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.92µs"
	I0429 13:24:39.186454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:39.225431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="489.82µs"
	I0429 13:24:39.240601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.692µs"
	I0429 13:24:41.224874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.16068ms"
	I0429 13:24:41.226082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.806µs"
	I0429 13:24:58.778782       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:59.920925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-404116-m03\" does not exist"
	I0429 13:24:59.924089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:24:59.935631       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-404116-m03" podCIDRs=["10.244.2.0/24"]
	I0429 13:25:08.344554       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:25:14.111783       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-404116-m02"
	I0429 13:25:53.805513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.08143ms"
	I0429 13:25:53.806404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.598µs"
	I0429 13:26:03.537100       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-pzf28"
	I0429 13:26:03.586446       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-pzf28"
	I0429 13:26:03.586494       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5fn5l"
	I0429 13:26:03.639379       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5fn5l"
	
	
	==> kube-proxy [e7f592c1524d6f6786827215f494346098e2b11ca8edae17f5a8cf1518cd6e2b] <==
	I0429 13:23:52.662333       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:23:52.755097       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.179"]
	I0429 13:23:52.879702       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:23:52.879805       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:23:52.879837       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:23:52.885399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:23:52.885623       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:23:52.885658       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:23:52.887824       1 config.go:192] "Starting service config controller"
	I0429 13:23:52.887869       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:23:52.887906       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:23:52.887910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:23:52.889604       1 config.go:319] "Starting node config controller"
	I0429 13:23:52.889631       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:23:52.988832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:23:52.988927       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:23:52.990354       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e8798b622aa8ff8bd9ad4110fef152fb2d1d8a8bfc7abf4afb1fe1f4f82e55e1] <==
	I0429 13:17:52.408262       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:17:52.423860       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.179"]
	I0429 13:17:52.473813       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:17:52.473908       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:17:52.473938       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:17:52.476973       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:17:52.477383       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:17:52.477601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:17:52.479811       1 config.go:192] "Starting service config controller"
	I0429 13:17:52.479869       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:17:52.479927       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:17:52.479945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:17:52.480658       1 config.go:319] "Starting node config controller"
	I0429 13:17:52.480725       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:17:52.580162       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:17:52.580392       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:17:52.580858       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [429fa04058735787479db7cff7f4d375aa8fdedc6f889e8d769cb1ea464bed09] <==
	E0429 13:17:34.966682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 13:17:34.979284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 13:17:34.979339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 13:17:34.981860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 13:17:34.981908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 13:17:35.137686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 13:17:35.137745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 13:17:35.166098       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 13:17:35.166266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 13:17:35.184991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 13:17:35.185099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 13:17:35.202385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 13:17:35.202436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 13:17:35.295459       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 13:17:35.295509       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:17:35.408487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 13:17:35.408594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 13:17:35.437613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 13:17:35.438174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 13:17:35.453388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 13:17:35.453503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0429 13:17:38.413022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:22:11.364886       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 13:22:11.365489       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0429 13:22:11.365587       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [82d8672ffe5025b6c6f5507368b4249ed2d52ed700e6bc9f41e7aa8a4ae4e676] <==
	I0429 13:23:48.878298       1 serving.go:380] Generated self-signed cert in-memory
	W0429 13:23:51.060592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 13:23:51.060691       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:23:51.060704       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 13:23:51.060710       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 13:23:51.192361       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 13:23:51.192478       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:23:51.196967       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 13:23:51.197006       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:23:51.199581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 13:23:51.199707       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 13:23:51.297862       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.638433    3055 topology_manager.go:215] "Topology Admit Handler" podUID="93b1fe59-3774-4270-84bc-d3028250d27e" podNamespace="kube-system" podName="kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.638613    3055 topology_manager.go:215] "Topology Admit Handler" podUID="749d96cc-d7ac-4204-8508-554f13dd2f79" podNamespace="default" podName="busybox-fc5497c4f-qv47r"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.661027    3055 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679393    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/729af88d-aabc-412e-a4ba-e6fde2391fe5-xtables-lock\") pod \"kube-proxy-rz7lc\" (UID: \"729af88d-aabc-412e-a4ba-e6fde2391fe5\") " pod="kube-system/kube-proxy-rz7lc"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679427    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc25a9d4-3dae-4000-a303-6afc0ef95463-tmp\") pod \"storage-provisioner\" (UID: \"fc25a9d4-3dae-4000-a303-6afc0ef95463\") " pod="kube-system/storage-provisioner"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679501    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-lib-modules\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679532    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729af88d-aabc-412e-a4ba-e6fde2391fe5-lib-modules\") pod \"kube-proxy-rz7lc\" (UID: \"729af88d-aabc-412e-a4ba-e6fde2391fe5\") " pod="kube-system/kube-proxy-rz7lc"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679561    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-xtables-lock\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:51 multinode-404116 kubelet[3055]: I0429 13:23:51.679576    3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93b1fe59-3774-4270-84bc-d3028250d27e-cni-cfg\") pod \"kindnet-f8fr7\" (UID: \"93b1fe59-3774-4270-84bc-d3028250d27e\") " pod="kube-system/kindnet-f8fr7"
	Apr 29 13:23:58 multinode-404116 kubelet[3055]: I0429 13:23:58.624800    3055 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 29 13:24:46 multinode-404116 kubelet[3055]: E0429 13:24:46.743126    3055 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:24:46 multinode-404116 kubelet[3055]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:25:46 multinode-404116 kubelet[3055]: E0429 13:25:46.741143    3055 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:25:46 multinode-404116 kubelet[3055]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:25:46 multinode-404116 kubelet[3055]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:25:46 multinode-404116 kubelet[3055]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:25:46 multinode-404116 kubelet[3055]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:26:46 multinode-404116 kubelet[3055]: E0429 13:26:46.742537    3055 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:26:46 multinode-404116 kubelet[3055]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:26:46 multinode-404116 kubelet[3055]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:26:46 multinode-404116 kubelet[3055]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:26:46 multinode-404116 kubelet[3055]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:27:35.215783  890667 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18773-847310/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-404116 -n multinode-404116
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-404116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.75s)

                                                
                                    
x
+
TestPreload (272.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-017424 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-017424 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.095944809s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-017424 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-017424
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-017424: exit status 82 (2m0.535568643s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-017424"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-017424 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-29 13:35:32.748919895 +0000 UTC m=+5836.215722508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-017424 -n test-preload-017424
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-017424 -n test-preload-017424: exit status 3 (18.696317064s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 13:35:51.439895  893513 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host
	E0429 13:35:51.439921  893513 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-017424" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-017424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-017424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-017424: (1.011312028s)
--- FAIL: TestPreload (272.33s)

                                                
                                    
x
+
TestKubernetesUpgrade (344.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m40.463172418s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-092103] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-092103" primary control-plane node in "kubernetes-upgrade-092103" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:40:29.374120  900446 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:40:29.374476  900446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:40:29.374489  900446 out.go:304] Setting ErrFile to fd 2...
	I0429 13:40:29.374495  900446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:40:29.374781  900446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:40:29.375713  900446 out.go:298] Setting JSON to false
	I0429 13:40:29.377214  900446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":80574,"bootTime":1714317455,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:40:29.377314  900446 start.go:139] virtualization: kvm guest
	I0429 13:40:29.379921  900446 out.go:177] * [kubernetes-upgrade-092103] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:40:29.381469  900446 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:40:29.381548  900446 notify.go:220] Checking for updates...
	I0429 13:40:29.382989  900446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:40:29.384403  900446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:40:29.385831  900446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:40:29.387248  900446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:40:29.388604  900446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:40:29.390799  900446 config.go:182] Loaded profile config "NoKubernetes-492236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0429 13:40:29.390962  900446 config.go:182] Loaded profile config "cert-expiration-512362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:40:29.391140  900446 config.go:182] Loaded profile config "running-upgrade-396169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0429 13:40:29.391293  900446 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:40:29.438244  900446 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 13:40:29.439534  900446 start.go:297] selected driver: kvm2
	I0429 13:40:29.439560  900446 start.go:901] validating driver "kvm2" against <nil>
	I0429 13:40:29.439591  900446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:40:29.440441  900446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:40:29.440544  900446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:40:29.458600  900446 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:40:29.458686  900446 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 13:40:29.458993  900446 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 13:40:29.459064  900446 cni.go:84] Creating CNI manager for ""
	I0429 13:40:29.459079  900446 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:40:29.459087  900446 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 13:40:29.459161  900446 start.go:340] cluster config:
	{Name:kubernetes-upgrade-092103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-092103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:40:29.459276  900446 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:40:29.462109  900446 out.go:177] * Starting "kubernetes-upgrade-092103" primary control-plane node in "kubernetes-upgrade-092103" cluster
	I0429 13:40:29.463323  900446 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 13:40:29.463408  900446 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:40:29.463440  900446 cache.go:56] Caching tarball of preloaded images
	I0429 13:40:29.463539  900446 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:40:29.463551  900446 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 13:40:29.463647  900446 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/config.json ...
	I0429 13:40:29.463692  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/config.json: {Name:mk312ec4b84a2eb546ef30f5846f5652bd64b28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:40:29.463879  900446 start.go:360] acquireMachinesLock for kubernetes-upgrade-092103: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:40:36.001483  900446 start.go:364] duration metric: took 6.537554131s to acquireMachinesLock for "kubernetes-upgrade-092103"
	I0429 13:40:36.001569  900446 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-092103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-092103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:40:36.001702  900446 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 13:40:36.004040  900446 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 13:40:36.004328  900446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:40:36.004393  900446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:40:36.025448  900446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0429 13:40:36.026030  900446 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:40:36.026795  900446 main.go:141] libmachine: Using API Version  1
	I0429 13:40:36.026824  900446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:40:36.027285  900446 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:40:36.027657  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetMachineName
	I0429 13:40:36.027855  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:40:36.028096  900446 start.go:159] libmachine.API.Create for "kubernetes-upgrade-092103" (driver="kvm2")
	I0429 13:40:36.028136  900446 client.go:168] LocalClient.Create starting
	I0429 13:40:36.028191  900446 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 13:40:36.028262  900446 main.go:141] libmachine: Decoding PEM data...
	I0429 13:40:36.028299  900446 main.go:141] libmachine: Parsing certificate...
	I0429 13:40:36.028382  900446 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 13:40:36.028408  900446 main.go:141] libmachine: Decoding PEM data...
	I0429 13:40:36.028420  900446 main.go:141] libmachine: Parsing certificate...
	I0429 13:40:36.028444  900446 main.go:141] libmachine: Running pre-create checks...
	I0429 13:40:36.028458  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .PreCreateCheck
	I0429 13:40:36.028988  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetConfigRaw
	I0429 13:40:36.029563  900446 main.go:141] libmachine: Creating machine...
	I0429 13:40:36.029596  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .Create
	I0429 13:40:36.030098  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Creating KVM machine...
	I0429 13:40:36.031961  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found existing default KVM network
	I0429 13:40:36.033934  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.033717  900529 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:a4:59} reservation:<nil>}
	I0429 13:40:36.035450  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.035302  900529 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205590}
	I0429 13:40:36.035502  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | created network xml: 
	I0429 13:40:36.035546  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | <network>
	I0429 13:40:36.035564  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   <name>mk-kubernetes-upgrade-092103</name>
	I0429 13:40:36.035582  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   <dns enable='no'/>
	I0429 13:40:36.035592  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   
	I0429 13:40:36.035610  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0429 13:40:36.035631  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |     <dhcp>
	I0429 13:40:36.035649  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0429 13:40:36.035665  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |     </dhcp>
	I0429 13:40:36.035676  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   </ip>
	I0429 13:40:36.035690  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG |   
	I0429 13:40:36.035700  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | </network>
	I0429 13:40:36.035713  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | 
	I0429 13:40:36.043218  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | trying to create private KVM network mk-kubernetes-upgrade-092103 192.168.50.0/24...
	I0429 13:40:36.162340  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | private KVM network mk-kubernetes-upgrade-092103 192.168.50.0/24 created
	I0429 13:40:36.162384  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103 ...
	I0429 13:40:36.162401  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.162286  900529 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:40:36.162424  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 13:40:36.162442  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 13:40:36.460860  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.460662  900529 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa...
	I0429 13:40:36.574380  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.574184  900529 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/kubernetes-upgrade-092103.rawdisk...
	I0429 13:40:36.574424  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Writing magic tar header
	I0429 13:40:36.574471  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Writing SSH key tar header
	I0429 13:40:36.574528  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:36.574319  900529 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103 ...
	I0429 13:40:36.574549  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103 (perms=drwx------)
	I0429 13:40:36.574587  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103
	I0429 13:40:36.574612  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 13:40:36.574628  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 13:40:36.574642  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:40:36.574651  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 13:40:36.574664  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 13:40:36.574679  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 13:40:36.574694  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 13:40:36.574706  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 13:40:36.574717  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 13:40:36.574727  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Creating domain...
	I0429 13:40:36.574768  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home/jenkins
	I0429 13:40:36.574802  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Checking permissions on dir: /home
	I0429 13:40:36.574817  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Skipping /home - not owner
	I0429 13:40:36.576076  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) define libvirt domain using xml: 
	I0429 13:40:36.576101  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) <domain type='kvm'>
	I0429 13:40:36.576112  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <name>kubernetes-upgrade-092103</name>
	I0429 13:40:36.576126  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <memory unit='MiB'>2200</memory>
	I0429 13:40:36.576136  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <vcpu>2</vcpu>
	I0429 13:40:36.576144  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <features>
	I0429 13:40:36.576155  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <acpi/>
	I0429 13:40:36.576171  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <apic/>
	I0429 13:40:36.576184  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <pae/>
	I0429 13:40:36.576208  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     
	I0429 13:40:36.576218  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   </features>
	I0429 13:40:36.576225  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <cpu mode='host-passthrough'>
	I0429 13:40:36.576233  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   
	I0429 13:40:36.576246  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   </cpu>
	I0429 13:40:36.576260  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <os>
	I0429 13:40:36.576268  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <type>hvm</type>
	I0429 13:40:36.576273  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <boot dev='cdrom'/>
	I0429 13:40:36.576278  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <boot dev='hd'/>
	I0429 13:40:36.576284  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <bootmenu enable='no'/>
	I0429 13:40:36.576288  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   </os>
	I0429 13:40:36.576293  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   <devices>
	I0429 13:40:36.576298  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <disk type='file' device='cdrom'>
	I0429 13:40:36.576308  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/boot2docker.iso'/>
	I0429 13:40:36.576325  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <target dev='hdc' bus='scsi'/>
	I0429 13:40:36.576337  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <readonly/>
	I0429 13:40:36.576348  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </disk>
	I0429 13:40:36.576358  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <disk type='file' device='disk'>
	I0429 13:40:36.576372  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 13:40:36.576385  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/kubernetes-upgrade-092103.rawdisk'/>
	I0429 13:40:36.576397  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <target dev='hda' bus='virtio'/>
	I0429 13:40:36.576431  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </disk>
	I0429 13:40:36.576461  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <interface type='network'>
	I0429 13:40:36.576474  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <source network='mk-kubernetes-upgrade-092103'/>
	I0429 13:40:36.576486  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <model type='virtio'/>
	I0429 13:40:36.576496  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </interface>
	I0429 13:40:36.576508  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <interface type='network'>
	I0429 13:40:36.576522  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <source network='default'/>
	I0429 13:40:36.576538  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <model type='virtio'/>
	I0429 13:40:36.576552  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </interface>
	I0429 13:40:36.576563  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <serial type='pty'>
	I0429 13:40:36.576573  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <target port='0'/>
	I0429 13:40:36.576584  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </serial>
	I0429 13:40:36.576592  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <console type='pty'>
	I0429 13:40:36.576605  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <target type='serial' port='0'/>
	I0429 13:40:36.576613  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </console>
	I0429 13:40:36.576625  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     <rng model='virtio'>
	I0429 13:40:36.576640  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)       <backend model='random'>/dev/random</backend>
	I0429 13:40:36.576651  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     </rng>
	I0429 13:40:36.576672  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     
	I0429 13:40:36.576688  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)     
	I0429 13:40:36.576700  900446 main.go:141] libmachine: (kubernetes-upgrade-092103)   </devices>
	I0429 13:40:36.576709  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) </domain>
	I0429 13:40:36.576721  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) 
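
Editor's note: the domain definition logged above is handed to libvirt as a single XML string and only then started. A minimal Go sketch of that step using the libvirt.org/go/libvirt bindings is shown below; the connection URI and the elided XML literal are illustrative, this is not minikube's actual code path.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system libvirt daemon (assumed URI).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domainXML would be the full <domain type='kvm'>...</domain> document
	// shown in the log above; it is elided here for brevity.
	domainXML := "<domain type='kvm'>...</domain>"

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}
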
	I0429 13:40:36.581584  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:73:3d:81 in network default
	I0429 13:40:36.582293  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:36.582317  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Ensuring networks are active...
	I0429 13:40:36.583313  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Ensuring network default is active
	I0429 13:40:36.583744  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Ensuring network mk-kubernetes-upgrade-092103 is active
	I0429 13:40:36.584413  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Getting domain xml...
	I0429 13:40:36.585270  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Creating domain...
	I0429 13:40:38.187805  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Waiting to get IP...
	I0429 13:40:38.188938  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.189497  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.189536  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:38.189484  900529 retry.go:31] will retry after 248.115905ms: waiting for machine to come up
	I0429 13:40:38.439147  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.439781  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.439809  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:38.439744  900529 retry.go:31] will retry after 237.828134ms: waiting for machine to come up
	I0429 13:40:38.679683  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.680396  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:38.680435  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:38.680331  900529 retry.go:31] will retry after 320.610134ms: waiting for machine to come up
	I0429 13:40:39.003445  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:39.004063  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:39.004095  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:39.004011  900529 retry.go:31] will retry after 471.088456ms: waiting for machine to come up
	I0429 13:40:39.960783  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:39.961619  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:39.961705  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:39.961579  900529 retry.go:31] will retry after 587.044445ms: waiting for machine to come up
	I0429 13:40:40.550718  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:40.551348  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:40.551385  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:40.551299  900529 retry.go:31] will retry after 572.915844ms: waiting for machine to come up
	I0429 13:40:41.126424  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:41.126985  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:41.127022  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:41.126926  900529 retry.go:31] will retry after 1.00885903s: waiting for machine to come up
	I0429 13:40:42.137731  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:42.138358  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:42.138389  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:42.138304  900529 retry.go:31] will retry after 1.406599339s: waiting for machine to come up
	I0429 13:40:43.547383  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:43.547965  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:43.547998  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:43.547905  900529 retry.go:31] will retry after 1.75539328s: waiting for machine to come up
	I0429 13:40:45.304733  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:45.305278  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:45.305307  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:45.305231  900529 retry.go:31] will retry after 1.780045919s: waiting for machine to come up
	I0429 13:40:47.087841  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:47.088511  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:47.088544  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:47.088449  900529 retry.go:31] will retry after 2.898951053s: waiting for machine to come up
	I0429 13:40:49.988638  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:49.989158  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:49.989187  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:49.989105  900529 retry.go:31] will retry after 2.547235974s: waiting for machine to come up
	I0429 13:40:52.539041  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:52.539500  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:52.539530  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:52.539453  900529 retry.go:31] will retry after 2.857302736s: waiting for machine to come up
	I0429 13:40:55.398210  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:55.398685  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find current IP address of domain kubernetes-upgrade-092103 in network mk-kubernetes-upgrade-092103
	I0429 13:40:55.398719  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | I0429 13:40:55.398635  900529 retry.go:31] will retry after 3.462559872s: waiting for machine to come up
	I0429 13:40:58.864604  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:58.865230  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Found IP for machine: 192.168.50.154
	I0429 13:40:58.865254  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Reserving static IP address...
	I0429 13:40:58.865270  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has current primary IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:58.865790  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-092103", mac: "52:54:00:d6:18:0c", ip: "192.168.50.154"} in network mk-kubernetes-upgrade-092103
	I0429 13:40:58.967672  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Getting to WaitForSSH function...
	I0429 13:40:58.967825  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Reserved static IP address: 192.168.50.154
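
Editor's note: the retry.go lines above show a simple wait loop with growing sleep intervals while the freshly created VM acquires a DHCP lease. A rough Go sketch of that pattern follows; lookupLeaseIP is a hypothetical stand-in for the lease lookup, and the backoff growth factor is an assumption for illustration, not minikube's real code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP is a hypothetical helper that would query the libvirt
// network for a DHCP lease matching the VM's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls for the VM's IP, sleeping longer between attempts,
// until it appears or the timeout elapses.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		// Grow the interval, roughly mirroring the increasing delays in the log.
		backoff = backoff * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:d6:18:0c", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}
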
	I0429 13:40:58.967850  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Waiting for SSH to be available...
	I0429 13:40:58.970797  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:58.971322  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:58.971353  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:58.971485  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Using SSH client type: external
	I0429 13:40:58.971511  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa (-rw-------)
	I0429 13:40:58.971556  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:40:58.971573  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | About to run SSH command:
	I0429 13:40:58.971598  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | exit 0
	I0429 13:40:59.109659  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | SSH cmd err, output: <nil>: 
	I0429 13:40:59.109977  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) KVM machine creation complete!
	I0429 13:40:59.110323  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetConfigRaw
	I0429 13:40:59.111070  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:40:59.111295  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:40:59.111493  900446 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 13:40:59.111517  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetState
	I0429 13:40:59.113286  900446 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 13:40:59.113329  900446 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 13:40:59.113338  900446 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 13:40:59.113348  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.117844  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.118673  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.118711  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.119120  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.119608  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.119873  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.120284  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.120642  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:40:59.120981  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:40:59.121007  900446 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 13:40:59.235694  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:40:59.235739  900446 main.go:141] libmachine: Detecting the provisioner...
	I0429 13:40:59.235753  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.239424  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.239947  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.240004  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.240190  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.240381  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.240567  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.240780  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.241031  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:40:59.241217  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:40:59.241229  900446 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 13:40:59.365212  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 13:40:59.365357  900446 main.go:141] libmachine: found compatible host: buildroot
	I0429 13:40:59.365372  900446 main.go:141] libmachine: Provisioning with buildroot...
	I0429 13:40:59.365385  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetMachineName
	I0429 13:40:59.365724  900446 buildroot.go:166] provisioning hostname "kubernetes-upgrade-092103"
	I0429 13:40:59.365753  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetMachineName
	I0429 13:40:59.366008  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.370126  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.370627  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.370675  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.370957  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.371233  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.371450  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.371603  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.371848  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:40:59.372117  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:40:59.372138  900446 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-092103 && echo "kubernetes-upgrade-092103" | sudo tee /etc/hostname
	I0429 13:40:59.510327  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-092103
	
	I0429 13:40:59.510378  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.513775  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.514163  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.514198  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.514441  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.514675  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.514871  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.515046  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.515300  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:40:59.515547  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:40:59.515567  900446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-092103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-092103/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-092103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:40:59.650424  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:40:59.650462  900446 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:40:59.650487  900446 buildroot.go:174] setting up certificates
	I0429 13:40:59.650496  900446 provision.go:84] configureAuth start
	I0429 13:40:59.650505  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetMachineName
	I0429 13:40:59.651007  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetIP
	I0429 13:40:59.654097  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.654518  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.654544  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.654720  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.657627  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.658199  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.658230  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.658449  900446 provision.go:143] copyHostCerts
	I0429 13:40:59.658517  900446 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:40:59.658528  900446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:40:59.658590  900446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:40:59.658689  900446 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:40:59.658697  900446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:40:59.658718  900446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:40:59.658775  900446 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:40:59.658782  900446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:40:59.658800  900446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:40:59.658844  900446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-092103 san=[127.0.0.1 192.168.50.154 kubernetes-upgrade-092103 localhost minikube]
	I0429 13:40:59.786861  900446 provision.go:177] copyRemoteCerts
	I0429 13:40:59.786938  900446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:40:59.786974  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.789908  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.790333  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.790356  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.790649  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.790885  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.791058  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.791199  900446 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa Username:docker}
	I0429 13:40:59.880117  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:40:59.909203  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 13:40:59.939130  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:40:59.968685  900446 provision.go:87] duration metric: took 318.170659ms to configureAuth
	I0429 13:40:59.968725  900446 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:40:59.968924  900446 config.go:182] Loaded profile config "kubernetes-upgrade-092103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 13:40:59.969042  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:40:59.972206  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.972660  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:40:59.972706  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:40:59.973023  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:40:59.973263  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.973476  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:40:59.973705  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:40:59.973963  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:40:59.974194  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:40:59.974212  900446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:41:00.266787  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:41:00.266836  900446 main.go:141] libmachine: Checking connection to Docker...
	I0429 13:41:00.266849  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetURL
	I0429 13:41:00.268450  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | Using libvirt version 6000000
	I0429 13:41:00.270799  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.271305  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.271335  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.271591  900446 main.go:141] libmachine: Docker is up and running!
	I0429 13:41:00.271612  900446 main.go:141] libmachine: Reticulating splines...
	I0429 13:41:00.271621  900446 client.go:171] duration metric: took 24.243471207s to LocalClient.Create
	I0429 13:41:00.271653  900446 start.go:167] duration metric: took 24.24355958s to libmachine.API.Create "kubernetes-upgrade-092103"
	I0429 13:41:00.271667  900446 start.go:293] postStartSetup for "kubernetes-upgrade-092103" (driver="kvm2")
	I0429 13:41:00.271682  900446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:41:00.271709  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:41:00.272040  900446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:41:00.272070  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:41:00.275093  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.275575  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.275611  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.275804  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:41:00.276092  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:41:00.276299  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:41:00.276457  900446 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa Username:docker}
	I0429 13:41:00.368320  900446 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:41:00.373734  900446 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:41:00.373768  900446 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:41:00.373841  900446 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:41:00.373936  900446 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:41:00.374054  900446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:41:00.387534  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:41:00.418859  900446 start.go:296] duration metric: took 147.173464ms for postStartSetup
	I0429 13:41:00.418926  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetConfigRaw
	I0429 13:41:00.419728  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetIP
	I0429 13:41:00.423347  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.423784  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.423819  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.424228  900446 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/config.json ...
	I0429 13:41:00.424459  900446 start.go:128] duration metric: took 24.422741816s to createHost
	I0429 13:41:00.424487  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:41:00.427847  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.428290  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.428327  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.428510  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:41:00.428807  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:41:00.429022  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:41:00.429214  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:41:00.429396  900446 main.go:141] libmachine: Using SSH client type: native
	I0429 13:41:00.429586  900446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.154 22 <nil> <nil>}
	I0429 13:41:00.429597  900446 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:41:00.544944  900446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398060.492849529
	
	I0429 13:41:00.544984  900446 fix.go:216] guest clock: 1714398060.492849529
	I0429 13:41:00.544995  900446 fix.go:229] Guest: 2024-04-29 13:41:00.492849529 +0000 UTC Remote: 2024-04-29 13:41:00.42447281 +0000 UTC m=+31.111353260 (delta=68.376719ms)
	I0429 13:41:00.545050  900446 fix.go:200] guest clock delta is within tolerance: 68.376719ms
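
Editor's note: the clock check above parses the guest's `date +%s.%N` output and compares it against the host clock, acting only if the delta exceeds a tolerance. A minimal sketch of that comparison follows; the one-second tolerance is an assumed value for illustration, not minikube's configured threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output (e.g. "1714398060.492849529")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714398060.492849529")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := time.Duration(math.Abs(float64(host.Sub(guest))))
	// Assumed tolerance, purely for the example.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
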
	I0429 13:41:00.545056  900446 start.go:83] releasing machines lock for "kubernetes-upgrade-092103", held for 24.543535352s
	I0429 13:41:00.545088  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:41:00.545447  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetIP
	I0429 13:41:00.548348  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.548701  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.548745  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.548990  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:41:00.549651  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:41:00.549864  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .DriverName
	I0429 13:41:00.549971  900446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:41:00.550023  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:41:00.550117  900446 ssh_runner.go:195] Run: cat /version.json
	I0429 13:41:00.550153  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHHostname
	I0429 13:41:00.553189  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.553372  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.553587  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.553623  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.554024  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:41:00.554114  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:00.554143  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:00.554328  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHPort
	I0429 13:41:00.554388  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:41:00.554550  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:41:00.554570  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHKeyPath
	I0429 13:41:00.554767  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetSSHUsername
	I0429 13:41:00.554757  900446 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa Username:docker}
	I0429 13:41:00.554950  900446 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/kubernetes-upgrade-092103/id_rsa Username:docker}
	I0429 13:41:00.637700  900446 ssh_runner.go:195] Run: systemctl --version
	I0429 13:41:00.669212  900446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:41:00.844346  900446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:41:00.851775  900446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:41:00.851881  900446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:41:00.872116  900446 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 13:41:00.872146  900446 start.go:494] detecting cgroup driver to use...
	I0429 13:41:00.872224  900446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:41:00.896690  900446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:41:00.915944  900446 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:41:00.916023  900446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:41:00.932769  900446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:41:00.951255  900446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:41:01.088551  900446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:41:01.276757  900446 docker.go:233] disabling docker service ...
	I0429 13:41:01.276893  900446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:41:01.294089  900446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:41:01.313295  900446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:41:01.457147  900446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:41:01.615738  900446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:41:01.639627  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:41:01.662802  900446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 13:41:01.662886  900446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:41:01.675718  900446 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:41:01.675810  900446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:41:01.690439  900446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:41:01.704474  900446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
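
Editor's note: the sed invocations above rewrite pause_image and cgroup_manager and replace conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. The same line rewrites can be sketched in Go with regexp; the starting config contents here are made up for the example, and only the target values are taken from the log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting contents; the real file lives on the guest.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Mirror: sed '/conmon_cgroup = .*/d' then sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
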
	I0429 13:41:01.717870  900446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:41:01.731641  900446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:41:01.744022  900446 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 13:41:01.744101  900446 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 13:41:01.763106  900446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:41:01.780643  900446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:41:01.938286  900446 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:41:02.101711  900446 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:41:02.101810  900446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:41:02.107522  900446 start.go:562] Will wait 60s for crictl version
	I0429 13:41:02.107593  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:02.112145  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:41:02.161874  900446 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:41:02.162037  900446 ssh_runner.go:195] Run: crio --version
	I0429 13:41:02.195600  900446 ssh_runner.go:195] Run: crio --version
	I0429 13:41:02.233713  900446 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 13:41:02.235230  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) Calling .GetIP
	I0429 13:41:02.238588  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:02.239069  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:18:0c", ip: ""} in network mk-kubernetes-upgrade-092103: {Iface:virbr1 ExpiryTime:2024-04-29 14:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:18:0c Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:kubernetes-upgrade-092103 Clientid:01:52:54:00:d6:18:0c}
	I0429 13:41:02.239113  900446 main.go:141] libmachine: (kubernetes-upgrade-092103) DBG | domain kubernetes-upgrade-092103 has defined IP address 192.168.50.154 and MAC address 52:54:00:d6:18:0c in network mk-kubernetes-upgrade-092103
	I0429 13:41:02.239515  900446 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 13:41:02.244781  900446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:41:02.261764  900446 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-092103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-092103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:41:02.261880  900446 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 13:41:02.261941  900446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:41:02.305009  900446 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 13:41:02.305096  900446 ssh_runner.go:195] Run: which lz4
	I0429 13:41:02.310265  900446 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 13:41:02.316234  900446 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 13:41:02.316276  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 13:41:04.451161  900446 crio.go:462] duration metric: took 2.140943137s to copy over tarball
	I0429 13:41:04.451282  900446 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 13:41:07.530694  900446 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.079369835s)
	I0429 13:41:07.530736  900446 crio.go:469] duration metric: took 3.079518024s to extract the tarball
	I0429 13:41:07.530748  900446 ssh_runner.go:146] rm: /preloaded.tar.lz4
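The preload path above boils down to: ask crictl whether a marker image is already present, copy the lz4 tarball over if it is not, unpack it into /var with tar -I lz4, and delete the tarball. A compressed sketch of that sequence with os/exec, keeping the paths from the log, omitting the scp step, and simplifying error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// does the runtime already have the marker image?
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.20.0") {
		fmt.Println("images already preloaded, nothing to do")
		return
	}
	fmt.Println("assuming images are not preloaded")

	// the real flow scp's the tarball here; this sketch assumes it is already at /preloaded.tar.lz4
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if combined, err := extract.CombinedOutput(); err != nil {
		fmt.Println("extract failed:", err, string(combined))
		return
	}
	_ = os.Remove("/preloaded.tar.lz4") // best effort, mirrors the rm step
}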
	I0429 13:41:07.578292  900446 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:41:07.637011  900446 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 13:41:07.637044  900446 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 13:41:07.637128  900446 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 13:41:07.637152  900446 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 13:41:07.637173  900446 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 13:41:07.637119  900446 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:41:07.637224  900446 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 13:41:07.637224  900446 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 13:41:07.637126  900446 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 13:41:07.637153  900446 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 13:41:07.639114  900446 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 13:41:07.639121  900446 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 13:41:07.639059  900446 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:41:07.816335  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 13:41:07.818513  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 13:41:07.818521  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 13:41:07.827747  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 13:41:07.848954  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:41:07.853799  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 13:41:07.861359  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 13:41:07.898664  900446 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 13:41:07.948076  900446 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 13:41:07.948145  900446 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 13:41:07.948220  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.036740  900446 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 13:41:08.036771  900446 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 13:41:08.036813  900446 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 13:41:08.036813  900446 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 13:41:08.036866  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.036880  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.096727  900446 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 13:41:08.096787  900446 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 13:41:08.096858  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.214040  900446 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 13:41:08.214092  900446 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 13:41:08.214103  900446 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 13:41:08.214040  900446 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 13:41:08.214212  900446 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 13:41:08.214248  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 13:41:08.214265  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.214268  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 13:41:08.214150  900446 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 13:41:08.214324  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 13:41:08.214328  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.214173  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 13:41:08.214170  900446 ssh_runner.go:195] Run: which crictl
	I0429 13:41:08.343153  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 13:41:08.348122  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 13:41:08.348147  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 13:41:08.348221  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 13:41:08.348243  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 13:41:08.348332  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 13:41:08.348393  900446 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 13:41:08.406303  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 13:41:08.430597  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 13:41:08.445361  900446 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 13:41:08.445448  900446 cache_images.go:92] duration metric: took 808.38893ms to LoadCachedImages
	W0429 13:41:08.445535  900446 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0429 13:41:08.445556  900446 kubeadm.go:928] updating node { 192.168.50.154 8443 v1.20.0 crio true true} ...
	I0429 13:41:08.445691  900446 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-092103 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-092103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:41:08.445757  900446 ssh_runner.go:195] Run: crio config
	I0429 13:41:08.501277  900446 cni.go:84] Creating CNI manager for ""
	I0429 13:41:08.501309  900446 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:41:08.501324  900446 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:41:08.501355  900446 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-092103 NodeName:kubernetes-upgrade-092103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 13:41:08.501590  900446 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-092103"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:41:08.501682  900446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 13:41:08.513414  900446 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:41:08.513510  900446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:41:08.526323  900446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0429 13:41:08.551038  900446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:41:08.574388  900446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0429 13:41:08.600490  900446 ssh_runner.go:195] Run: grep 192.168.50.154	control-plane.minikube.internal$ /etc/hosts
	I0429 13:41:08.606642  900446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:41:08.626069  900446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:41:08.808355  900446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:41:08.832376  900446 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103 for IP: 192.168.50.154
	I0429 13:41:08.832409  900446 certs.go:194] generating shared ca certs ...
	I0429 13:41:08.832432  900446 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:08.832644  900446 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:41:08.832720  900446 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:41:08.832732  900446 certs.go:256] generating profile certs ...
	I0429 13:41:08.832810  900446 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.key
	I0429 13:41:08.832826  900446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.crt with IP's: []
	I0429 13:41:09.055184  900446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.crt ...
	I0429 13:41:09.055229  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.crt: {Name:mkb43fe3390c8e0cf2673cb1857d56428bc103ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:09.116250  900446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.key ...
	I0429 13:41:09.116299  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/client.key: {Name:mk388eb694f05d8abfdf02883d0da3fe15b23d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:09.116554  900446 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key.6ed3120f
	I0429 13:41:09.116582  900446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt.6ed3120f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.154]
	I0429 13:41:09.305413  900446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt.6ed3120f ...
	I0429 13:41:09.305453  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt.6ed3120f: {Name:mk8b4648a986d0cb900183b8e5a53c047933e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:09.305635  900446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key.6ed3120f ...
	I0429 13:41:09.305650  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key.6ed3120f: {Name:mk99e795177e57e8358289a76eb3dc21ac2c20e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:09.305745  900446 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt.6ed3120f -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt
	I0429 13:41:09.305858  900446 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key.6ed3120f -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key
	I0429 13:41:09.305950  900446 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.key
	I0429 13:41:09.305971  900446 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.crt with IP's: []
	I0429 13:41:09.669543  900446 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.crt ...
	I0429 13:41:09.669593  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.crt: {Name:mk5adf932dd2a767c297fefc97353cc463d9346f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:41:09.669823  900446 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.key ...
	I0429 13:41:09.669850  900446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.key: {Name:mk34fbe8b8f8c9463af29caa4d0eb6fdb4a50b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
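The certs.go steps above generate a client certificate, an apiserver serving certificate restricted to the listed IPs, and an aggregator proxy-client certificate, each signed by the shared minikube CA. A self-contained sketch of the same idea with crypto/x509 follows: one throwaway CA and one leaf certificate with IP SANs. Key sizes, serial numbers, and validity periods here are assumptions for illustration, not what minikube uses.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA key and self-signed CA certificate
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// leaf certificate signed by the CA, with IP SANs like the apiserver cert in the log
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("192.168.50.154")},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	fmt.Println("leaf signed by", caCert.Subject.CommonName)
}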
	I0429 13:41:09.670042  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:41:09.670081  900446 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:41:09.670092  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:41:09.670120  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:41:09.670143  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:41:09.670167  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:41:09.670207  900446 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:41:09.670992  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:41:09.705683  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:41:09.735765  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:41:09.773885  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:41:09.808061  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 13:41:09.846986  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:41:09.887427  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:41:09.937615  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kubernetes-upgrade-092103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 13:41:09.977607  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:41:10.011802  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:41:10.046871  900446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:41:10.083630  900446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:41:10.108619  900446 ssh_runner.go:195] Run: openssl version
	I0429 13:41:10.115528  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:41:10.132708  900446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:41:10.139875  900446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:41:10.139970  900446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:41:10.149532  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:41:10.167528  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:41:10.187529  900446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:41:10.195713  900446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:41:10.195803  900446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:41:10.205407  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:41:10.218729  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:41:10.236486  900446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:41:10.242400  900446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:41:10.242485  900446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:41:10.251425  900446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
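The openssl x509 -hash / ln -fs pairs above install each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name (for example b5213941.0 for minikubeCA.pem), which is how the system trust store looks up issuers. A small sketch of the same two-step, shelling out for the hash and creating the symlink in a scratch directory rather than /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "minikubeCA.pem" // hypothetical local copy of the CA certificate
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	certsDir := "certs" // stand-in for /etc/ssl/certs
	_ = os.MkdirAll(certsDir, 0o755)
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("installed", certPath, "as", link)
}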
	I0429 13:41:10.268715  900446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:41:10.274489  900446 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 13:41:10.274559  900446 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-092103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-092103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:41:10.274667  900446 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:41:10.274741  900446 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:41:10.328599  900446 cri.go:89] found id: ""
	I0429 13:41:10.328716  900446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:41:10.341341  900446 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:41:10.353592  900446 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:41:10.365608  900446 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:41:10.365651  900446 kubeadm.go:156] found existing configuration files:
	
	I0429 13:41:10.365728  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:41:10.376764  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:41:10.376861  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:41:10.388579  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:41:10.400508  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:41:10.400626  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:41:10.413705  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:41:10.425906  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:41:10.426005  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:41:10.441006  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:41:10.453104  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:41:10.453208  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:41:10.468488  900446 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:41:10.652445  900446 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 13:41:10.652603  900446 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:41:10.892874  900446 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:41:10.893077  900446 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:41:10.893217  900446 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:41:11.196511  900446 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:41:11.220699  900446 out.go:204]   - Generating certificates and keys ...
	I0429 13:41:11.220858  900446 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:41:11.221008  900446 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:41:11.664097  900446 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 13:41:12.029759  900446 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 13:41:12.126993  900446 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 13:41:12.215703  900446 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 13:41:12.329672  900446 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 13:41:12.329928  900446 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-092103 localhost] and IPs [192.168.50.154 127.0.0.1 ::1]
	I0429 13:41:12.436213  900446 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 13:41:12.436515  900446 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-092103 localhost] and IPs [192.168.50.154 127.0.0.1 ::1]
	I0429 13:41:13.058436  900446 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 13:41:13.319133  900446 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 13:41:13.918117  900446 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 13:41:13.918460  900446 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:41:14.034847  900446 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:41:14.212990  900446 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:41:14.341800  900446 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:41:14.394116  900446 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:41:14.437334  900446 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:41:14.439470  900446 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:41:14.439543  900446 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:41:14.619141  900446 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:41:14.621017  900446 out.go:204]   - Booting up control plane ...
	I0429 13:41:14.621168  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:41:14.626751  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:41:14.631313  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:41:14.631471  900446 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:41:14.636052  900446 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 13:41:54.591783  900446 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 13:41:54.592404  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:41:54.592689  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:41:59.592660  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:41:59.592905  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:42:09.592395  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:42:09.592749  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:42:29.593057  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:42:29.593320  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:43:09.594647  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:43:09.594965  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:43:09.594998  900446 kubeadm.go:309] 
	I0429 13:43:09.595061  900446 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 13:43:09.595123  900446 kubeadm.go:309] 		timed out waiting for the condition
	I0429 13:43:09.595134  900446 kubeadm.go:309] 
	I0429 13:43:09.595186  900446 kubeadm.go:309] 	This error is likely caused by:
	I0429 13:43:09.595250  900446 kubeadm.go:309] 		- The kubelet is not running
	I0429 13:43:09.595460  900446 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 13:43:09.595485  900446 kubeadm.go:309] 
	I0429 13:43:09.595623  900446 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 13:43:09.595685  900446 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 13:43:09.595725  900446 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 13:43:09.595740  900446 kubeadm.go:309] 
	I0429 13:43:09.595908  900446 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 13:43:09.596025  900446 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 13:43:09.596037  900446 kubeadm.go:309] 
	I0429 13:43:09.596175  900446 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 13:43:09.596304  900446 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 13:43:09.596405  900446 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 13:43:09.596526  900446 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 13:43:09.596571  900446 kubeadm.go:309] 
	I0429 13:43:09.596728  900446 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:43:09.596860  900446 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 13:43:09.596970  900446 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
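The repeated [kubelet-check] lines above are kubeadm polling the kubelet's healthz endpoint on localhost:10248 and getting connection refused because the kubelet never comes up. A minimal reproduction of that probe loop is below; the endpoint comes from the log, while the retry cadence and attempt count are assumptions:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// this is the "connection refused" case seen in the log
			fmt.Printf("attempt %d: kubelet not healthy: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz returned", resp.Status)
		return
	}
	fmt.Println("giving up: kubelet never became healthy")
}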
	W0429 13:43:09.597138  900446 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-092103 localhost] and IPs [192.168.50.154 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-092103 localhost] and IPs [192.168.50.154 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 13:43:09.597206  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 13:43:12.122926  900446 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.52568106s)
	I0429 13:43:12.123032  900446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:43:12.142385  900446 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:43:12.154897  900446 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:43:12.154923  900446 kubeadm.go:156] found existing configuration files:
	
	I0429 13:43:12.154983  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:43:12.169935  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:43:12.170015  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:43:12.189475  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:43:12.203540  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:43:12.203599  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:43:12.216618  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:43:12.228664  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:43:12.228784  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:43:12.240300  900446 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:43:12.252554  900446 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:43:12.252665  900446 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:43:12.267855  900446 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:43:12.547205  900446 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:45:09.051445  900446 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 13:45:09.051535  900446 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 13:45:09.053646  900446 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 13:45:09.053847  900446 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:45:09.054002  900446 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:45:09.054160  900446 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:45:09.054306  900446 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:45:09.054400  900446 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:45:09.056828  900446 out.go:204]   - Generating certificates and keys ...
	I0429 13:45:09.056922  900446 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:45:09.057124  900446 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:45:09.057275  900446 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 13:45:09.057393  900446 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 13:45:09.057488  900446 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 13:45:09.057570  900446 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 13:45:09.057657  900446 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 13:45:09.057742  900446 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 13:45:09.057849  900446 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 13:45:09.057973  900446 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 13:45:09.058045  900446 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 13:45:09.058101  900446 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:45:09.058150  900446 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:45:09.058248  900446 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:45:09.058326  900446 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:45:09.058424  900446 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:45:09.058565  900446 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:45:09.058650  900446 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:45:09.058686  900446 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:45:09.058765  900446 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:45:09.061910  900446 out.go:204]   - Booting up control plane ...
	I0429 13:45:09.062028  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:45:09.062106  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:45:09.062195  900446 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:45:09.062285  900446 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:45:09.062439  900446 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 13:45:09.062490  900446 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 13:45:09.062550  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:45:09.062727  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:45:09.062793  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:45:09.062944  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:45:09.063013  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:45:09.063164  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:45:09.063227  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:45:09.063439  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:45:09.063542  900446 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:45:09.063786  900446 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:45:09.063802  900446 kubeadm.go:309] 
	I0429 13:45:09.063869  900446 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 13:45:09.063934  900446 kubeadm.go:309] 		timed out waiting for the condition
	I0429 13:45:09.063943  900446 kubeadm.go:309] 
	I0429 13:45:09.063996  900446 kubeadm.go:309] 	This error is likely caused by:
	I0429 13:45:09.064046  900446 kubeadm.go:309] 		- The kubelet is not running
	I0429 13:45:09.064199  900446 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 13:45:09.064218  900446 kubeadm.go:309] 
	I0429 13:45:09.064347  900446 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 13:45:09.064409  900446 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 13:45:09.064455  900446 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 13:45:09.064466  900446 kubeadm.go:309] 
	I0429 13:45:09.064614  900446 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 13:45:09.064708  900446 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 13:45:09.064719  900446 kubeadm.go:309] 
	I0429 13:45:09.064857  900446 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 13:45:09.064944  900446 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 13:45:09.065025  900446 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 13:45:09.065087  900446 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 13:45:09.065123  900446 kubeadm.go:309] 
	I0429 13:45:09.065240  900446 kubeadm.go:393] duration metric: took 3m58.790687732s to StartCluster
	I0429 13:45:09.065300  900446 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 13:45:09.065378  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 13:45:09.121901  900446 cri.go:89] found id: ""
	I0429 13:45:09.121940  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.121952  900446 logs.go:278] No container was found matching "kube-apiserver"
	I0429 13:45:09.121960  900446 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 13:45:09.122102  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 13:45:09.165421  900446 cri.go:89] found id: ""
	I0429 13:45:09.165462  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.165477  900446 logs.go:278] No container was found matching "etcd"
	I0429 13:45:09.165485  900446 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 13:45:09.165560  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 13:45:09.208091  900446 cri.go:89] found id: ""
	I0429 13:45:09.208130  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.208141  900446 logs.go:278] No container was found matching "coredns"
	I0429 13:45:09.208148  900446 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 13:45:09.208218  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 13:45:09.250454  900446 cri.go:89] found id: ""
	I0429 13:45:09.250491  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.250504  900446 logs.go:278] No container was found matching "kube-scheduler"
	I0429 13:45:09.250511  900446 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 13:45:09.250587  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 13:45:09.294891  900446 cri.go:89] found id: ""
	I0429 13:45:09.294958  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.294975  900446 logs.go:278] No container was found matching "kube-proxy"
	I0429 13:45:09.295091  900446 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 13:45:09.295228  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 13:45:09.340519  900446 cri.go:89] found id: ""
	I0429 13:45:09.340561  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.340576  900446 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 13:45:09.340599  900446 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 13:45:09.340681  900446 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 13:45:09.387455  900446 cri.go:89] found id: ""
	I0429 13:45:09.387501  900446 logs.go:276] 0 containers: []
	W0429 13:45:09.387515  900446 logs.go:278] No container was found matching "kindnet"
	I0429 13:45:09.387531  900446 logs.go:123] Gathering logs for CRI-O ...
	I0429 13:45:09.387551  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 13:45:09.492447  900446 logs.go:123] Gathering logs for container status ...
	I0429 13:45:09.492504  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 13:45:09.541257  900446 logs.go:123] Gathering logs for kubelet ...
	I0429 13:45:09.541294  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 13:45:09.594609  900446 logs.go:123] Gathering logs for dmesg ...
	I0429 13:45:09.594661  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 13:45:09.610851  900446 logs.go:123] Gathering logs for describe nodes ...
	I0429 13:45:09.610896  900446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 13:45:09.756109  900446 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0429 13:45:09.756161  900446 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 13:45:09.756215  900446 out.go:239] * 
	W0429 13:45:09.756277  900446 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 13:45:09.756300  900446 out.go:239] * 
	W0429 13:45:09.757287  900446 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 13:45:09.761309  900446 out.go:177] 
	W0429 13:45:09.763030  900446 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 13:45:09.763134  900446 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 13:45:09.763173  900446 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 13:45:09.765143  900446 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
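The failure above ends with minikube's own hint from the log (pass kubelet.cgroup-driver=systemd via --extra-config). Purely as a hedged sketch, not a verified fix: the commands below combine that hint with the flags this test already uses; the profile name, driver, runtime, memory and Kubernetes version are copied from the failing command, and whether the cgroup-driver override actually resolves this wait-control-plane timeout is an assumption.

    # inspect the kubelet on the node (commands taken from the kubeadm hint in the log above)
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-092103 sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-092103 sudo journalctl -xeu kubelet
    # retry the start with the override suggested in the log (assumption: it addresses this timeout)
    out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd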
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-092103
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-092103: (3.340198105s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-092103 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-092103 status --format={{.Host}}: exit status 7 (98.45186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.639634648s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-092103 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (115.254055ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-092103] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-092103
	    minikube start -p kubernetes-upgrade-092103 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0921032 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-092103 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-092103 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (16.790391987s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-29 13:46:10.888137036 +0000 UTC m=+6474.354939658
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-092103 -n kubernetes-upgrade-092103
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-092103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-092103 logs -n 25: (1.355477817s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo journalctl                       | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| start   | -p calico-807154 --memory=3072                       | calico-807154         | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo docker                           | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo                                  | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo cat                              | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo containerd                       | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo systemctl                        | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo find                             | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-807154 sudo crio                             | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-807154                                       | auto-807154           | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC | 29 Apr 24 13:46 UTC |
	| start   | -p custom-flannel-807154                             | custom-flannel-807154 | jenkins | v1.33.0 | 29 Apr 24 13:46 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:46:11
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:46:11.258516  909776 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:46:11.259220  909776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:46:11.259239  909776 out.go:304] Setting ErrFile to fd 2...
	I0429 13:46:11.259247  909776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:46:11.259833  909776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:46:11.260967  909776 out.go:298] Setting JSON to false
	I0429 13:46:11.263045  909776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":80916,"bootTime":1714317455,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:46:11.263222  909776 start.go:139] virtualization: kvm guest
	I0429 13:46:11.265707  909776 out.go:177] * [custom-flannel-807154] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:46:11.267767  909776 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:46:11.267802  909776 notify.go:220] Checking for updates...
	I0429 13:46:11.269347  909776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:46:11.270951  909776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:46:11.272562  909776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:46:11.274265  909776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:46:11.275912  909776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.646529608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398371646493859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83ebbc7d-698d-4a11-bf92-01a6c9eed53c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.647488150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f02c7c05-3216-4730-87fa-cb1589ce28f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.647592375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f02c7c05-3216-4730-87fa-cb1589ce28f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.647892653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c11524b7ffbde26d0d2dad1557b88667e160ddbdfd5f1ff5abe2f100aea1b3df,PodSandboxId:a97b46fc983681e192eb01e69de01c40e3232b517ed6f58c35a91ba1e18acb1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398363822624742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21302fe792a15b7aa961a6150580b8fa9cb917ba2383ec3cfe152526de18b496,PodSandboxId:6e07bca80af6939ad5ecbd73172c11b106d70801bc1902774b584909a8719702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398363812582676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e1e234a7546708ed7e0586e6899a6de7225022b8be0374a6b56a0c02b68fab,PodSandboxId:b3acb0053548fcb926044292ac53431071d383348e2e4fb0619663cef36e02a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398363797633374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecaddbc74bcffe33f8e03780a4716ff13fa6053f290480484d87728f99d72e6,PodSandboxId:38e5d331ede169d9668cad7b960a2b60a650712c062937cdcccea0e6c85aed01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398363784089444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5,PodSandboxId:1f2e118464c3cbf3eb1365b5ce9aa2bc2d017ce4e059d8ee07f6c8e17ce82a5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714398356672917869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c,PodSandboxId:4e8b1a5626b699cd202c1b8739471021af46f18d81ee4a018eb4fe24283a3d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714398356617546169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa,PodSandboxId:51c2571db4cc3d0889ca75fee0e25b988b6967f4c692a399b99030f8c1eecbfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714398356585118174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938,PodSandboxId:c2ba00d46992de49da7ae0e174473b11de012404deb2bee3fd5b9a69d3133c95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714398356486788077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f02c7c05-3216-4730-87fa-cb1589ce28f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.690514634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e192f276-a2c3-444d-924c-251c73715559 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.690597956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e192f276-a2c3-444d-924c-251c73715559 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.692544853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09ab47e6-453f-4a1c-89d5-dd973246d8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.693045041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398371693012399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09ab47e6-453f-4a1c-89d5-dd973246d8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.693849057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d782cbb-0222-4c73-a589-0f61000b985e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.693977114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d782cbb-0222-4c73-a589-0f61000b985e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.694221592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c11524b7ffbde26d0d2dad1557b88667e160ddbdfd5f1ff5abe2f100aea1b3df,PodSandboxId:a97b46fc983681e192eb01e69de01c40e3232b517ed6f58c35a91ba1e18acb1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398363822624742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21302fe792a15b7aa961a6150580b8fa9cb917ba2383ec3cfe152526de18b496,PodSandboxId:6e07bca80af6939ad5ecbd73172c11b106d70801bc1902774b584909a8719702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398363812582676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e1e234a7546708ed7e0586e6899a6de7225022b8be0374a6b56a0c02b68fab,PodSandboxId:b3acb0053548fcb926044292ac53431071d383348e2e4fb0619663cef36e02a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398363797633374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecaddbc74bcffe33f8e03780a4716ff13fa6053f290480484d87728f99d72e6,PodSandboxId:38e5d331ede169d9668cad7b960a2b60a650712c062937cdcccea0e6c85aed01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398363784089444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5,PodSandboxId:1f2e118464c3cbf3eb1365b5ce9aa2bc2d017ce4e059d8ee07f6c8e17ce82a5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714398356672917869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c,PodSandboxId:4e8b1a5626b699cd202c1b8739471021af46f18d81ee4a018eb4fe24283a3d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714398356617546169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa,PodSandboxId:51c2571db4cc3d0889ca75fee0e25b988b6967f4c692a399b99030f8c1eecbfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714398356585118174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938,PodSandboxId:c2ba00d46992de49da7ae0e174473b11de012404deb2bee3fd5b9a69d3133c95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714398356486788077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d782cbb-0222-4c73-a589-0f61000b985e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.746047758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b73ae58-6894-4f8c-b70f-15c2fac4021b name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.746131350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b73ae58-6894-4f8c-b70f-15c2fac4021b name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.747502437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e562a65e-b486-4755-85ae-81787df4cda1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.747924058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398371747898798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e562a65e-b486-4755-85ae-81787df4cda1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.748814056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abdb0d05-b03e-450d-8ac7-00e5a3a82067 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.748896957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abdb0d05-b03e-450d-8ac7-00e5a3a82067 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.749092089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c11524b7ffbde26d0d2dad1557b88667e160ddbdfd5f1ff5abe2f100aea1b3df,PodSandboxId:a97b46fc983681e192eb01e69de01c40e3232b517ed6f58c35a91ba1e18acb1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398363822624742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21302fe792a15b7aa961a6150580b8fa9cb917ba2383ec3cfe152526de18b496,PodSandboxId:6e07bca80af6939ad5ecbd73172c11b106d70801bc1902774b584909a8719702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398363812582676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e1e234a7546708ed7e0586e6899a6de7225022b8be0374a6b56a0c02b68fab,PodSandboxId:b3acb0053548fcb926044292ac53431071d383348e2e4fb0619663cef36e02a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398363797633374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecaddbc74bcffe33f8e03780a4716ff13fa6053f290480484d87728f99d72e6,PodSandboxId:38e5d331ede169d9668cad7b960a2b60a650712c062937cdcccea0e6c85aed01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398363784089444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5,PodSandboxId:1f2e118464c3cbf3eb1365b5ce9aa2bc2d017ce4e059d8ee07f6c8e17ce82a5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714398356672917869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c,PodSandboxId:4e8b1a5626b699cd202c1b8739471021af46f18d81ee4a018eb4fe24283a3d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714398356617546169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa,PodSandboxId:51c2571db4cc3d0889ca75fee0e25b988b6967f4c692a399b99030f8c1eecbfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714398356585118174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938,PodSandboxId:c2ba00d46992de49da7ae0e174473b11de012404deb2bee3fd5b9a69d3133c95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714398356486788077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abdb0d05-b03e-450d-8ac7-00e5a3a82067 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.787213823Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1ed7fde-52bc-4e95-bbed-cb78193042cc name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.787357180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1ed7fde-52bc-4e95-bbed-cb78193042cc name=/runtime.v1.RuntimeService/Version
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.788816234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d491af28-7bd3-4bb1-9d14-00c4bd3ed574 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.789254080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398371789227683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d491af28-7bd3-4bb1-9d14-00c4bd3ed574 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.790412152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fbc460f-b219-4577-88d2-8513b0784270 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.790491591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fbc460f-b219-4577-88d2-8513b0784270 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:46:11 kubernetes-upgrade-092103 crio[1891]: time="2024-04-29 13:46:11.790809703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c11524b7ffbde26d0d2dad1557b88667e160ddbdfd5f1ff5abe2f100aea1b3df,PodSandboxId:a97b46fc983681e192eb01e69de01c40e3232b517ed6f58c35a91ba1e18acb1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398363822624742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21302fe792a15b7aa961a6150580b8fa9cb917ba2383ec3cfe152526de18b496,PodSandboxId:6e07bca80af6939ad5ecbd73172c11b106d70801bc1902774b584909a8719702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398363812582676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e1e234a7546708ed7e0586e6899a6de7225022b8be0374a6b56a0c02b68fab,PodSandboxId:b3acb0053548fcb926044292ac53431071d383348e2e4fb0619663cef36e02a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398363797633374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecaddbc74bcffe33f8e03780a4716ff13fa6053f290480484d87728f99d72e6,PodSandboxId:38e5d331ede169d9668cad7b960a2b60a650712c062937cdcccea0e6c85aed01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398363784089444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5,PodSandboxId:1f2e118464c3cbf3eb1365b5ce9aa2bc2d017ce4e059d8ee07f6c8e17ce82a5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714398356672917869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21e1bce6ef86024b3797941199d29dc,},Annotations:map[string]string{io.kubernetes.container.hash: e52c0405,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c,PodSandboxId:4e8b1a5626b699cd202c1b8739471021af46f18d81ee4a018eb4fe24283a3d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714398356617546169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ee3fdd9bf56adcba57088f1f8dc314b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa,PodSandboxId:51c2571db4cc3d0889ca75fee0e25b988b6967f4c692a399b99030f8c1eecbfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714398356585118174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25368943259393ece235487c2cc821cf,},Annotations:map[string]string{io.kubernetes.container.hash: 17c5109,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938,PodSandboxId:c2ba00d46992de49da7ae0e174473b11de012404deb2bee3fd5b9a69d3133c95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714398356486788077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-092103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f384ca3470f7744fabefa465b695f3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fbc460f-b219-4577-88d2-8513b0784270 name=/runtime.v1.RuntimeService/ListContainers
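	
	The CRI-O entries above are debug traces of CRI gRPC calls (Version, ImageFsInfo, ListContainers) recorded while the log bundle was collected. A minimal sketch of issuing equivalent queries by hand on the node, assuming crictl is available and CRI-O is listening on its default socket (the socket path is an assumption and may differ in this image):
	
	  # Query the runtime the same way the log collector does (assumed default CRI-O socket path).
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a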
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c11524b7ffbde       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   8 seconds ago       Running             kube-scheduler            2                   a97b46fc98368       kube-scheduler-kubernetes-upgrade-092103
	21302fe792a15       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   8 seconds ago       Running             kube-apiserver            2                   6e07bca80af69       kube-apiserver-kubernetes-upgrade-092103
	01e1e234a7546       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   8 seconds ago       Running             kube-controller-manager   2                   b3acb0053548f       kube-controller-manager-kubernetes-upgrade-092103
	9ecaddbc74bcf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   38e5d331ede16       etcd-kubernetes-upgrade-092103
	49ebfe8be1b6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 seconds ago      Exited              etcd                      1                   1f2e118464c3c       etcd-kubernetes-upgrade-092103
	eed0b7dc208aa       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 seconds ago      Exited              kube-controller-manager   1                   4e8b1a5626b69       kube-controller-manager-kubernetes-upgrade-092103
	5c15163826488       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 seconds ago      Exited              kube-apiserver            1                   51c2571db4cc3       kube-apiserver-kubernetes-upgrade-092103
	bf2b430d9f1bb       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 seconds ago      Exited              kube-scheduler            1                   c2ba00d46992d       kube-scheduler-kubernetes-upgrade-092103
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-092103
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-092103
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:45:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-092103
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:46:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:46:07 +0000   Mon, 29 Apr 2024 13:45:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:46:07 +0000   Mon, 29 Apr 2024 13:45:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:46:07 +0000   Mon, 29 Apr 2024 13:45:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:46:07 +0000   Mon, 29 Apr 2024 13:45:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.154
	  Hostname:    kubernetes-upgrade-092103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26d15be4df634bf6a36a9d8e3d9378a3
	  System UUID:                26d15be4-df63-4bf6-a36a-9d8e3d9378a3
	  Boot ID:                    406ccaf9-ec1e-4d1c-b987-b87eed2d3fd8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-092103                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-092103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-092103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-kubernetes-upgrade-092103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 29s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet  Node kubernetes-upgrade-092103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet  Node kubernetes-upgrade-092103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet  Node kubernetes-upgrade-092103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet  Updated Node Allocatable limit across pods
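	
	The node summary above has the shape of kubectl describe node output; the node.kubernetes.io/not-ready:NoSchedule taint listed under Taints normally clears once the CNI and kube-proxy are up. A minimal sketch of inspecting the same node directly, assuming the minikube profile name doubles as the kubectl context (as minikube sets up by default):
	
	  $ kubectl --context kubernetes-upgrade-092103 describe node kubernetes-upgrade-092103
	  $ kubectl --context kubernetes-upgrade-092103 get node kubernetes-upgrade-092103 -o jsonpath='{.spec.taints}'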
	
	
	==> dmesg <==
	[  +1.731929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.422486] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.062255] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072546] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.248914] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.148441] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.370791] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +5.337781] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +0.070496] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.836112] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +9.789714] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	[  +0.108891] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.429976] systemd-fstab-generator[1809]: Ignoring "noauto" option for root device
	[  +0.207011] systemd-fstab-generator[1822]: Ignoring "noauto" option for root device
	[  +0.258324] systemd-fstab-generator[1838]: Ignoring "noauto" option for root device
	[  +0.119558] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.106634] systemd-fstab-generator[1850]: Ignoring "noauto" option for root device
	[  +0.467340] systemd-fstab-generator[1878]: Ignoring "noauto" option for root device
	[  +1.542556] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[Apr29 13:46] systemd-fstab-generator[2336]: Ignoring "noauto" option for root device
	[  +0.109588] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.472975] systemd-fstab-generator[2609]: Ignoring "noauto" option for root device
	[  +0.117471] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5] <==
	{"level":"info","ts":"2024-04-29T13:45:57.556459Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"62.817562ms"}
	{"level":"info","ts":"2024-04-29T13:45:57.587487Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-29T13:45:57.649072Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c959469ae2bd4434","local-member-id":"ebed6f3735951470","commit-index":275}
	{"level":"info","ts":"2024-04-29T13:45:57.649207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-29T13:45:57.649351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 became follower at term 2"}
	{"level":"info","ts":"2024-04-29T13:45:57.649387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ebed6f3735951470 [peers: [], term: 2, commit: 275, applied: 0, lastindex: 275, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-29T13:45:57.652666Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-29T13:45:57.685982Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":269}
	{"level":"info","ts":"2024-04-29T13:45:57.7165Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-29T13:45:57.7486Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ebed6f3735951470","timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:45:57.748835Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ebed6f3735951470"}
	{"level":"info","ts":"2024-04-29T13:45:57.748896Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ebed6f3735951470","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-29T13:45:57.759827Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-29T13:45:57.765686Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:45:57.778834Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:45:57.779048Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ebed6f3735951470","initial-advertise-peer-urls":["https://192.168.50.154:2380"],"listen-peer-urls":["https://192.168.50.154:2380"],"advertise-client-urls":["https://192.168.50.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:45:57.779098Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:45:57.779169Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.154:2380"}
	{"level":"info","ts":"2024-04-29T13:45:57.779195Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.154:2380"}
	{"level":"info","ts":"2024-04-29T13:45:57.787586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 switched to configuration voters=(17000366451306337392)"}
	{"level":"info","ts":"2024-04-29T13:45:57.787681Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c959469ae2bd4434","local-member-id":"ebed6f3735951470","added-peer-id":"ebed6f3735951470","added-peer-peer-urls":["https://192.168.50.154:2380"]}
	{"level":"info","ts":"2024-04-29T13:45:57.787768Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c959469ae2bd4434","local-member-id":"ebed6f3735951470","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:45:57.787817Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:45:57.768928Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:45:57.789042Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	
	
	==> etcd [9ecaddbc74bcffe33f8e03780a4716ff13fa6053f290480484d87728f99d72e6] <==
	{"level":"info","ts":"2024-04-29T13:46:04.26233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c959469ae2bd4434","local-member-id":"ebed6f3735951470","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:46:04.26262Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:46:04.27353Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:46:04.27601Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ebed6f3735951470","initial-advertise-peer-urls":["https://192.168.50.154:2380"],"listen-peer-urls":["https://192.168.50.154:2380"],"advertise-client-urls":["https://192.168.50.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:46:04.26789Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ebed6f3735951470","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-29T13:46:04.275551Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:46:04.275794Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.154:2380"}
	{"level":"info","ts":"2024-04-29T13:46:04.277372Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:46:04.279486Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:46:04.279522Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:46:04.279606Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.154:2380"}
	{"level":"info","ts":"2024-04-29T13:46:05.113387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T13:46:05.113548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:46:05.113615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 received MsgPreVoteResp from ebed6f3735951470 at term 2"}
	{"level":"info","ts":"2024-04-29T13:46:05.113669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T13:46:05.113704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 received MsgVoteResp from ebed6f3735951470 at term 3"}
	{"level":"info","ts":"2024-04-29T13:46:05.113741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ebed6f3735951470 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T13:46:05.113778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ebed6f3735951470 elected leader ebed6f3735951470 at term 3"}
	{"level":"info","ts":"2024-04-29T13:46:05.12408Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ebed6f3735951470","local-member-attributes":"{Name:kubernetes-upgrade-092103 ClientURLs:[https://192.168.50.154:2379]}","request-path":"/0/members/ebed6f3735951470/attributes","cluster-id":"c959469ae2bd4434","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:46:05.126378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:46:05.12727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:46:05.135391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T13:46:05.139623Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:46:05.142168Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:46:05.173873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.154:2379"}
	
	
	==> kernel <==
	 13:46:12 up 0 min,  0 users,  load average: 1.54, 0.43, 0.15
	Linux kubernetes-upgrade-092103 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21302fe792a15b7aa961a6150580b8fa9cb917ba2383ec3cfe152526de18b496] <==
	I0429 13:46:07.186096       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0429 13:46:07.186163       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 13:46:07.281095       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:46:07.281192       1 policy_source.go:224] refreshing policies
	I0429 13:46:07.288964       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 13:46:07.324617       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 13:46:07.324910       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:46:07.325030       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 13:46:07.325064       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 13:46:07.326197       1 aggregator.go:165] initial CRD sync complete...
	I0429 13:46:07.326241       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 13:46:07.326250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 13:46:07.364246       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:46:07.379676       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0429 13:46:07.414075       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 13:46:07.429366       1 cache.go:39] Caches are synced for autoregister controller
	I0429 13:46:07.444636       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 13:46:07.446829       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 13:46:07.452336       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 13:46:08.149799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 13:46:09.124257       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:46:09.144115       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:46:09.194344       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:46:09.282176       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:46:09.302226       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa] <==
	I0429 13:45:57.109072       1 options.go:221] external host was not specified, using 192.168.50.154
	I0429 13:45:57.114386       1 server.go:148] Version: v1.30.0
	I0429 13:45:57.114476       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [01e1e234a7546708ed7e0586e6899a6de7225022b8be0374a6b56a0c02b68fab] <==
	I0429 13:46:10.284431       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0429 13:46:10.284589       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0429 13:46:10.284607       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0429 13:46:10.378379       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0429 13:46:10.378453       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0429 13:46:10.378560       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0429 13:46:10.378571       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0429 13:46:10.533601       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0429 13:46:10.533655       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0429 13:46:10.533747       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 13:46:10.545716       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0429 13:46:10.545744       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0429 13:46:10.545780       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 13:46:10.549097       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0429 13:46:10.549126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0429 13:46:10.549154       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 13:46:10.550505       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0429 13:46:10.550593       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0429 13:46:10.550611       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0429 13:46:10.550635       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 13:46:10.680990       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0429 13:46:10.681230       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0429 13:46:10.681253       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0429 13:46:10.728638       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0429 13:46:10.728837       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-controller-manager [eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c] <==
	
	
	==> kube-scheduler [bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938] <==
	
	
	==> kube-scheduler [c11524b7ffbde26d0d2dad1557b88667e160ddbdfd5f1ff5abe2f100aea1b3df] <==
	I0429 13:46:05.833268       1 serving.go:380] Generated self-signed cert in-memory
	W0429 13:46:07.218882       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 13:46:07.218972       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:46:07.219000       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 13:46:07.219025       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 13:46:07.344521       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 13:46:07.344560       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:46:07.355818       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 13:46:07.356260       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 13:46:07.357593       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:46:07.358109       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 13:46:07.458227       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.544685    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25368943259393ece235487c2cc821cf-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-092103\" (UID: \"25368943259393ece235487c2cc821cf\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.544821    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25368943259393ece235487c2cc821cf-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-092103\" (UID: \"25368943259393ece235487c2cc821cf\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.544942    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ee3fdd9bf56adcba57088f1f8dc314b-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-092103\" (UID: \"0ee3fdd9bf56adcba57088f1f8dc314b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.545063    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ee3fdd9bf56adcba57088f1f8dc314b-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-092103\" (UID: \"0ee3fdd9bf56adcba57088f1f8dc314b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.545164    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ee3fdd9bf56adcba57088f1f8dc314b-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-092103\" (UID: \"0ee3fdd9bf56adcba57088f1f8dc314b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.545231    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ee3fdd9bf56adcba57088f1f8dc314b-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-092103\" (UID: \"0ee3fdd9bf56adcba57088f1f8dc314b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.545375    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b21e1bce6ef86024b3797941199d29dc-etcd-certs\") pod \"etcd-kubernetes-upgrade-092103\" (UID: \"b21e1bce6ef86024b3797941199d29dc\") " pod="kube-system/etcd-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.545446    2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25368943259393ece235487c2cc821cf-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-092103\" (UID: \"25368943259393ece235487c2cc821cf\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.554837    2343 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: E0429 13:46:03.556323    2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.154:8443: connect: connection refused" node="kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: E0429 13:46:03.612180    2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.154:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-092103.17cac43a6380c5fe  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-092103,UID:kubernetes-upgrade-092103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-092103,},FirstTimestamp:2024-04-29 13:46:03.224425982 +0000 UTC m=+0.170344429,LastTimestamp:2024-04-29 13:46:03.224425982 +0000 UTC m=+0.170344429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-092103,}"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.762278    2343 scope.go:117] "RemoveContainer" containerID="49ebfe8be1b6b102c080ef986df0aa22b06cf067482e7adefd629b429fbef5f5"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.763619    2343 scope.go:117] "RemoveContainer" containerID="5c1516382648831f4ca91eaee977506c8b972095e3dea289010e2179eba7faaa"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.764689    2343 scope.go:117] "RemoveContainer" containerID="eed0b7dc208aa7b3b919091537619bc0fac28cbfce75e7ae7e77766993c8d38c"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.766611    2343 scope.go:117] "RemoveContainer" containerID="bf2b430d9f1bbfaeda5dcc802e06b7158cc7a9b6498a768ad0f64e6b6267c938"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: E0429 13:46:03.847769    2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-092103?timeout=10s\": dial tcp 192.168.50.154:8443: connect: connection refused" interval="800ms"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:03.959253    2343 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-092103"
	Apr 29 13:46:03 kubernetes-upgrade-092103 kubelet[2343]: E0429 13:46:03.960769    2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.154:8443: connect: connection refused" node="kubernetes-upgrade-092103"
	Apr 29 13:46:04 kubernetes-upgrade-092103 kubelet[2343]: W0429 13:46:04.192455    2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.154:8443: connect: connection refused
	Apr 29 13:46:04 kubernetes-upgrade-092103 kubelet[2343]: E0429 13:46:04.192549    2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.154:8443: connect: connection refused
	Apr 29 13:46:04 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:04.762740    2343 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-092103"
	Apr 29 13:46:07 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:07.222212    2343 apiserver.go:52] "Watching apiserver"
	Apr 29 13:46:07 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:07.342965    2343 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 13:46:07 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:07.397645    2343 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-092103"
	Apr 29 13:46:07 kubernetes-upgrade-092103 kubelet[2343]: I0429 13:46:07.397764    2343 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-092103"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-092103 -n kubernetes-upgrade-092103
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-092103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-kubernetes-upgrade-092103 kube-apiserver-kubernetes-upgrade-092103 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-092103 describe pod etcd-kubernetes-upgrade-092103 kube-apiserver-kubernetes-upgrade-092103 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-092103 describe pod etcd-kubernetes-upgrade-092103 kube-apiserver-kubernetes-upgrade-092103 storage-provisioner: exit status 1 (77.880842ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-kubernetes-upgrade-092103" not found
	Error from server (NotFound): pods "kube-apiserver-kubernetes-upgrade-092103" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-092103 describe pod etcd-kubernetes-upgrade-092103 kube-apiserver-kubernetes-upgrade-092103 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-092103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-092103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-092103: (1.243347047s)
--- FAIL: TestKubernetesUpgrade (344.97s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (438.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-553639 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-553639 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (7m13.473358319s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-553639] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-553639" primary control-plane node in "pause-553639" cluster
	* Updating the running kvm2 "pause-553639" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-553639" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:44:33.058540  905474 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:44:33.059097  905474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:44:33.059116  905474 out.go:304] Setting ErrFile to fd 2...
	I0429 13:44:33.059124  905474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:44:33.059684  905474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:44:33.060633  905474 out.go:298] Setting JSON to false
	I0429 13:44:33.062097  905474 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":80818,"bootTime":1714317455,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:44:33.062250  905474 start.go:139] virtualization: kvm guest
	I0429 13:44:33.066158  905474 out.go:177] * [pause-553639] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:44:33.067825  905474 notify.go:220] Checking for updates...
	I0429 13:44:33.067839  905474 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:44:33.069450  905474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:44:33.070972  905474 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:44:33.072368  905474 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:44:33.073906  905474 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:44:33.075723  905474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:44:33.078087  905474 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:44:33.078790  905474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:44:33.078882  905474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:44:33.098216  905474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0429 13:44:33.098753  905474 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:44:33.099547  905474 main.go:141] libmachine: Using API Version  1
	I0429 13:44:33.099574  905474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:44:33.100242  905474 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:44:33.100760  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:33.101126  905474 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:44:33.101659  905474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:44:33.101717  905474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:44:33.120087  905474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0429 13:44:33.120733  905474 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:44:33.121340  905474 main.go:141] libmachine: Using API Version  1
	I0429 13:44:33.121367  905474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:44:33.121874  905474 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:44:33.122111  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:33.168640  905474 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 13:44:33.170133  905474 start.go:297] selected driver: kvm2
	I0429 13:44:33.170152  905474 start.go:901] validating driver "kvm2" against &{Name:pause-553639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-553639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:44:33.170325  905474 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:44:33.170748  905474 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:44:33.170852  905474 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:44:33.189346  905474 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:44:33.190557  905474 cni.go:84] Creating CNI manager for ""
	I0429 13:44:33.190584  905474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:44:33.190673  905474 start.go:340] cluster config:
	{Name:pause-553639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-553639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:44:33.190888  905474 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:44:33.193315  905474 out.go:177] * Starting "pause-553639" primary control-plane node in "pause-553639" cluster
	I0429 13:44:33.195050  905474 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:44:33.195118  905474 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:44:33.195131  905474 cache.go:56] Caching tarball of preloaded images
	I0429 13:44:33.195274  905474 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:44:33.195291  905474 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 13:44:33.195520  905474 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/config.json ...
	I0429 13:44:33.195806  905474 start.go:360] acquireMachinesLock for pause-553639: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:44:46.769271  905474 start.go:364] duration metric: took 13.573424671s to acquireMachinesLock for "pause-553639"
	I0429 13:44:46.769342  905474 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:44:46.769352  905474 fix.go:54] fixHost starting: 
	I0429 13:44:46.769817  905474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:44:46.769921  905474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:44:46.790249  905474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0429 13:44:46.790895  905474 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:44:46.791517  905474 main.go:141] libmachine: Using API Version  1
	I0429 13:44:46.791542  905474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:44:46.791952  905474 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:44:46.792492  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:46.792912  905474 main.go:141] libmachine: (pause-553639) Calling .GetState
	I0429 13:44:46.795698  905474 fix.go:112] recreateIfNeeded on pause-553639: state=Running err=<nil>
	W0429 13:44:46.795734  905474 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:44:46.798584  905474 out.go:177] * Updating the running kvm2 "pause-553639" VM ...
	I0429 13:44:46.800369  905474 machine.go:94] provisionDockerMachine start ...
	I0429 13:44:46.800419  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:46.800814  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:46.804837  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:46.805512  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:46.805598  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:46.805823  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:46.806111  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:46.806333  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:46.806473  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:46.806683  905474 main.go:141] libmachine: Using SSH client type: native
	I0429 13:44:46.806930  905474 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.170 22 <nil> <nil>}
	I0429 13:44:46.806944  905474 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:44:46.926325  905474 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-553639
	
	I0429 13:44:46.926375  905474 main.go:141] libmachine: (pause-553639) Calling .GetMachineName
	I0429 13:44:46.926706  905474 buildroot.go:166] provisioning hostname "pause-553639"
	I0429 13:44:46.926751  905474 main.go:141] libmachine: (pause-553639) Calling .GetMachineName
	I0429 13:44:46.927035  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:46.930713  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:46.931203  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:46.931237  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:46.931460  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:46.931726  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:46.931963  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:46.932173  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:46.932362  905474 main.go:141] libmachine: Using SSH client type: native
	I0429 13:44:46.932606  905474 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.170 22 <nil> <nil>}
	I0429 13:44:46.932626  905474 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-553639 && echo "pause-553639" | sudo tee /etc/hostname
	I0429 13:44:47.069063  905474 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-553639
	
	I0429 13:44:47.069104  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:47.072265  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.072776  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:47.072828  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.073173  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:47.073472  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:47.073659  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:47.073828  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:47.074080  905474 main.go:141] libmachine: Using SSH client type: native
	I0429 13:44:47.074355  905474 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.170 22 <nil> <nil>}
	I0429 13:44:47.074384  905474 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-553639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-553639/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-553639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:44:47.189398  905474 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:44:47.189432  905474 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:44:47.189462  905474 buildroot.go:174] setting up certificates
	I0429 13:44:47.189483  905474 provision.go:84] configureAuth start
	I0429 13:44:47.189492  905474 main.go:141] libmachine: (pause-553639) Calling .GetMachineName
	I0429 13:44:47.189808  905474 main.go:141] libmachine: (pause-553639) Calling .GetIP
	I0429 13:44:47.193104  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.193609  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:47.193648  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.193893  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:47.197195  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.197671  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:47.197696  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.197963  905474 provision.go:143] copyHostCerts
	I0429 13:44:47.198039  905474 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:44:47.198054  905474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:44:47.198136  905474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:44:47.198237  905474 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:44:47.198246  905474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:44:47.198280  905474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:44:47.198344  905474 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:44:47.198351  905474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:44:47.198369  905474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:44:47.198418  905474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.pause-553639 san=[127.0.0.1 192.168.61.170 localhost minikube pause-553639]
	I0429 13:44:47.376225  905474 provision.go:177] copyRemoteCerts
	I0429 13:44:47.376316  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:44:47.376350  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:47.379566  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.380056  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:47.380091  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.380283  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:47.380528  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:47.380718  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:47.380941  905474 sshutil.go:53] new ssh client: &{IP:192.168.61.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/pause-553639/id_rsa Username:docker}
	I0429 13:44:47.471074  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:44:47.506217  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 13:44:47.542942  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:44:47.576984  905474 provision.go:87] duration metric: took 387.487108ms to configureAuth
	I0429 13:44:47.577021  905474 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:44:47.577267  905474 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:44:47.577362  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:47.580850  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.581503  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:47.581544  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:47.581822  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:47.582043  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:47.582248  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:47.582396  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:47.582631  905474 main.go:141] libmachine: Using SSH client type: native
	I0429 13:44:47.582908  905474 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.170 22 <nil> <nil>}
	I0429 13:44:47.582937  905474 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:44:54.794299  905474 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:44:54.794332  905474 machine.go:97] duration metric: took 7.993936708s to provisionDockerMachine
	I0429 13:44:54.794346  905474 start.go:293] postStartSetup for "pause-553639" (driver="kvm2")
	I0429 13:44:54.794379  905474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:44:54.794397  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:54.794928  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:44:54.794989  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:54.798698  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:54.799265  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:54.799291  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:54.799508  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:54.799780  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:54.800027  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:54.800224  905474 sshutil.go:53] new ssh client: &{IP:192.168.61.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/pause-553639/id_rsa Username:docker}
	I0429 13:44:54.894115  905474 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:44:54.900876  905474 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:44:54.900919  905474 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:44:54.901027  905474 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:44:54.901153  905474 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:44:54.901305  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:44:54.919309  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:44:54.958059  905474 start.go:296] duration metric: took 163.693252ms for postStartSetup
	I0429 13:44:54.958129  905474 fix.go:56] duration metric: took 8.188775836s for fixHost
	I0429 13:44:54.958161  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:54.962829  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:54.963474  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:54.963521  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:54.963702  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:54.963995  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:54.964223  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:54.964445  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:54.964715  905474 main.go:141] libmachine: Using SSH client type: native
	I0429 13:44:54.965125  905474 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.170 22 <nil> <nil>}
	I0429 13:44:54.965153  905474 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:44:55.106277  905474 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398295.101610996
	
	I0429 13:44:55.106307  905474 fix.go:216] guest clock: 1714398295.101610996
	I0429 13:44:55.106318  905474 fix.go:229] Guest: 2024-04-29 13:44:55.101610996 +0000 UTC Remote: 2024-04-29 13:44:54.958135228 +0000 UTC m=+21.967582826 (delta=143.475768ms)
	I0429 13:44:55.106375  905474 fix.go:200] guest clock delta is within tolerance: 143.475768ms
	I0429 13:44:55.106383  905474 start.go:83] releasing machines lock for "pause-553639", held for 8.337065778s
	I0429 13:44:55.106411  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:55.106725  905474 main.go:141] libmachine: (pause-553639) Calling .GetIP
	I0429 13:44:55.110969  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.113543  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:55.113690  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:55.113705  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.114622  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:55.114890  905474 main.go:141] libmachine: (pause-553639) Calling .DriverName
	I0429 13:44:55.115009  905474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:44:55.115078  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:55.115177  905474 ssh_runner.go:195] Run: cat /version.json
	I0429 13:44:55.115192  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHHostname
	I0429 13:44:55.121353  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.121538  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.121734  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:55.121765  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.121966  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:44:55.122008  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:44:55.122063  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:55.122349  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHPort
	I0429 13:44:55.122389  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:55.122564  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:55.122570  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHKeyPath
	I0429 13:44:55.122854  905474 main.go:141] libmachine: (pause-553639) Calling .GetSSHUsername
	I0429 13:44:55.122876  905474 sshutil.go:53] new ssh client: &{IP:192.168.61.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/pause-553639/id_rsa Username:docker}
	I0429 13:44:55.123108  905474 sshutil.go:53] new ssh client: &{IP:192.168.61.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/pause-553639/id_rsa Username:docker}
	I0429 13:44:55.243808  905474 ssh_runner.go:195] Run: systemctl --version
	I0429 13:44:55.303157  905474 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:44:56.070341  905474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:44:56.098759  905474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:44:56.098876  905474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:44:56.215989  905474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 13:44:56.216031  905474 start.go:494] detecting cgroup driver to use...
	I0429 13:44:56.216126  905474 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:44:56.330716  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:44:56.370298  905474 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:44:56.370387  905474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:44:56.420110  905474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:44:56.510793  905474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:44:56.763534  905474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:44:56.994412  905474 docker.go:233] disabling docker service ...
	I0429 13:44:56.994514  905474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:44:57.042879  905474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:44:57.072744  905474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:44:57.297846  905474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:44:57.530197  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:44:57.548619  905474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:44:57.583437  905474 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:44:57.583522  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.599916  905474 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:44:57.600016  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.616594  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.633934  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.651906  905474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:44:57.673040  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.690794  905474 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.706554  905474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:44:57.723960  905474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:44:57.738902  905474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:44:57.754920  905474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:44:57.940918  905474 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:46:28.473162  905474 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.532179072s)
	I0429 13:46:28.473208  905474 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:46:28.473279  905474 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:46:28.480538  905474 start.go:562] Will wait 60s for crictl version
	I0429 13:46:28.480624  905474 ssh_runner.go:195] Run: which crictl
	I0429 13:46:28.486312  905474 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:46:28.545327  905474 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:46:28.545432  905474 ssh_runner.go:195] Run: crio --version
	I0429 13:46:28.586343  905474 ssh_runner.go:195] Run: crio --version
	I0429 13:46:28.630599  905474 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:46:28.632497  905474 main.go:141] libmachine: (pause-553639) Calling .GetIP
	I0429 13:46:28.636672  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:46:28.637164  905474 main.go:141] libmachine: (pause-553639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8d:6d", ip: ""} in network mk-pause-553639: {Iface:virbr3 ExpiryTime:2024-04-29 14:43:46 +0000 UTC Type:0 Mac:52:54:00:c2:8d:6d Iaid: IPaddr:192.168.61.170 Prefix:24 Hostname:pause-553639 Clientid:01:52:54:00:c2:8d:6d}
	I0429 13:46:28.637192  905474 main.go:141] libmachine: (pause-553639) DBG | domain pause-553639 has defined IP address 192.168.61.170 and MAC address 52:54:00:c2:8d:6d in network mk-pause-553639
	I0429 13:46:28.637483  905474 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 13:46:28.643234  905474 kubeadm.go:877] updating cluster {Name:pause-553639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-553639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:46:28.643521  905474 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:46:28.643608  905474 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:46:28.695179  905474 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:46:28.695223  905474 crio.go:433] Images already preloaded, skipping extraction
	I0429 13:46:28.695304  905474 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:46:28.744520  905474 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:46:28.744550  905474 cache_images.go:84] Images are preloaded, skipping loading
	I0429 13:46:28.744560  905474 kubeadm.go:928] updating node { 192.168.61.170 8443 v1.30.0 crio true true} ...
	I0429 13:46:28.744708  905474 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-553639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-553639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:46:28.744820  905474 ssh_runner.go:195] Run: crio config
	I0429 13:46:28.806485  905474 cni.go:84] Creating CNI manager for ""
	I0429 13:46:28.806524  905474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:46:28.806543  905474 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:46:28.806576  905474 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.170 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-553639 NodeName:pause-553639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:46:28.806811  905474 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-553639"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:46:28.806899  905474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:46:28.820293  905474 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:46:28.820379  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:46:28.834251  905474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 13:46:28.858466  905474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:46:28.883476  905474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0429 13:46:28.910855  905474 ssh_runner.go:195] Run: grep 192.168.61.170	control-plane.minikube.internal$ /etc/hosts
	I0429 13:46:28.916573  905474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:46:29.102016  905474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:46:29.122285  905474 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639 for IP: 192.168.61.170
	I0429 13:46:29.122320  905474 certs.go:194] generating shared ca certs ...
	I0429 13:46:29.122342  905474 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:46:29.122547  905474 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:46:29.122615  905474 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:46:29.122630  905474 certs.go:256] generating profile certs ...
	I0429 13:46:29.122761  905474 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/client.key
	I0429 13:46:29.122850  905474 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/apiserver.key.8252bb55
	I0429 13:46:29.122922  905474 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/proxy-client.key
	I0429 13:46:29.123069  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:46:29.123108  905474 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:46:29.123118  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:46:29.123150  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:46:29.123177  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:46:29.123204  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:46:29.123262  905474 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:46:29.124051  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:46:29.155213  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:46:29.188466  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:46:29.220955  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:46:29.262840  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 13:46:29.294909  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 13:46:29.326113  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:46:29.363352  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/pause-553639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 13:46:29.392357  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:46:29.427586  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:46:29.462186  905474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:46:29.497278  905474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:46:29.525929  905474 ssh_runner.go:195] Run: openssl version
	I0429 13:46:29.545025  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:46:29.568025  905474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:46:29.592972  905474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:46:29.593074  905474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:46:29.651149  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:46:29.683187  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:46:29.732098  905474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:46:29.788198  905474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:46:29.788403  905474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:46:29.842885  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 13:46:29.907479  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:46:30.034694  905474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:46:30.101880  905474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:46:30.101973  905474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:46:30.168381  905474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:46:30.332684  905474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:46:30.369869  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 13:46:30.408599  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 13:46:30.497338  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 13:46:30.532676  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 13:46:30.577290  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 13:46:30.620711  905474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 13:46:30.675673  905474 kubeadm.go:391] StartCluster: {Name:pause-553639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-553639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:46:30.675861  905474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:46:30.675948  905474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:46:30.951165  905474 cri.go:89] found id: "abce25d933b0a94bb0d7188b17dd3ba20d884722a57bc37935b5872d741501b7"
	I0429 13:46:30.951200  905474 cri.go:89] found id: "55ad25acdbc8eefc9241c7ccdd1241413f99b9ee5b7569ad79cb653b4fe6b204"
	I0429 13:46:30.951207  905474 cri.go:89] found id: "7df1b67af5d12cec8abaea6bd540c32a894c33c4dadc18f66d009068a0d893f2"
	I0429 13:46:30.951212  905474 cri.go:89] found id: "b229d8351c123118d84f232e5bceac85c97ccc7a26651a598ad66c66d8303afb"
	I0429 13:46:30.951216  905474 cri.go:89] found id: "27dd137aa5be2df43e1e87f6b5d09f477c441f43562bb06ae2584cd232a280a3"
	I0429 13:46:30.951221  905474 cri.go:89] found id: "0f15912a741e97e81a6ba497734f2ed1c4f5830dce7072af7b13125633ebc119"
	I0429 13:46:30.951225  905474 cri.go:89] found id: "c00326cd8416ad99155dbf47788c85ecf8dfa15176bedef24c0114d9889f4174"
	I0429 13:46:30.951229  905474 cri.go:89] found id: "c33acfde233b8327e53f2a82471cebd53f6cf9d0b11354c74d58138af363817d"
	I0429 13:46:30.951233  905474 cri.go:89] found id: "89cde9de422d75a5b7e75b88d2fe99235991d5dd83a827c10434fe24ccee6b5c"
	I0429 13:46:30.951243  905474 cri.go:89] found id: "fd4d72cda23aa8b0a92b7cc071125740d554a2c4ab62006a6c8638dd804fcac7"
	I0429 13:46:30.951247  905474 cri.go:89] found id: "3c1b1c8c640c33ee1ca7550fc41db3d265721b050ae35ee75662ac71daba5631"
	I0429 13:46:30.951252  905474 cri.go:89] found id: "fa3aa96ef495075fe92df8c5d2ac4cfc6efba0c4f2b3625565f437453ff9acf6"
	I0429 13:46:30.951256  905474 cri.go:89] found id: "374467a936a12356e5f2a76855634ae89daa1a8ca86a1765f90fc1077a95d8a9"
	I0429 13:46:30.951260  905474 cri.go:89] found id: "b22205c65dce0d066de88ad3d0253dc415552678fa85be5a304533279c95f5cb"
	I0429 13:46:30.951266  905474 cri.go:89] found id: "9dfd88869cce2885f84495e0e27ffecf731462dbdb2325fed5a291de733621bc"
	I0429 13:46:30.951276  905474 cri.go:89] found id: ""
	I0429 13:46:30.951399  905474 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-553639 -n pause-553639
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-553639 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-553639 logs -n 25: (1.491530559s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-807154 sudo cat                           | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | systemctl status cri-docker                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo cat                           | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat cri-docker                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | cri-dockerd --version                                |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo find                          | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | systemctl status containerd                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo crio                          | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat containerd                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| delete  | -p flannel-807154                                    | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| start   | -p no-preload-301942                                 | no-preload-301942  | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo find                           | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo crio                           | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| delete  | -p bridge-807154                                     | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	| start   | -p embed-certs-954581                                | embed-certs-954581 | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                    |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:50:17
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:50:17.389789  919444 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:50:17.390118  919444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:50:17.390130  919444 out.go:304] Setting ErrFile to fd 2...
	I0429 13:50:17.390134  919444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:50:17.390322  919444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:50:17.390998  919444 out.go:298] Setting JSON to false
	I0429 13:50:17.392501  919444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":81162,"bootTime":1714317455,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:50:17.392587  919444 start.go:139] virtualization: kvm guest
	I0429 13:50:17.395252  919444 out.go:177] * [embed-certs-954581] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:50:17.397116  919444 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:50:17.397167  919444 notify.go:220] Checking for updates...
	I0429 13:50:17.398441  919444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:50:17.399943  919444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:50:17.401511  919444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:17.402901  919444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:50:17.404310  919444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:50:17.406105  919444 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:17.406221  919444 config.go:182] Loaded profile config "old-k8s-version-856849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 13:50:17.406347  919444 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:17.406497  919444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:50:17.451856  919444 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 13:50:17.453497  919444 start.go:297] selected driver: kvm2
	I0429 13:50:17.453523  919444 start.go:901] validating driver "kvm2" against <nil>
	I0429 13:50:17.453543  919444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:50:17.454659  919444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:17.454786  919444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:50:17.474193  919444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:50:17.474275  919444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 13:50:17.474511  919444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:50:17.474586  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:50:17.474603  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:50:17.474614  919444 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 13:50:17.474747  919444 start.go:340] cluster config:
	{Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:50:17.474937  919444 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:17.477339  919444 out.go:177] * Starting "embed-certs-954581" primary control-plane node in "embed-certs-954581" cluster
	I0429 13:50:14.094483  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:16.097414  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:14.395479  919134 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 13:50:14.392865  919134 cache.go:107] acquiring lock: {Name:mk9033ac27572f6bdd2f91b1761afa042faa357b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:14.393309  919134 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 13:50:14.395659  919134 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:14.395664  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:50:14.392805  919134 cache.go:107] acquiring lock: {Name:mk227d98603ff8c1cd8ccb99c0467815135844cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:14.395707  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:50:14.393010  919134 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:14.394826  919134 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:14.394868  919134 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:14.394975  919134 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:14.395764  919134 cache.go:115] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0429 13:50:14.395997  919134 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.199971ms
	I0429 13:50:14.396020  919134 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0429 13:50:14.394830  919134 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:14.396807  919134 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:14.397009  919134 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:14.397086  919134 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 13:50:14.415706  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0429 13:50:14.416313  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:50:14.416887  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:50:14.416918  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:50:14.417306  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:50:14.417540  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:14.417705  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:14.417887  919134 start.go:159] libmachine.API.Create for "no-preload-301942" (driver="kvm2")
	I0429 13:50:14.417918  919134 client.go:168] LocalClient.Create starting
	I0429 13:50:14.417957  919134 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 13:50:14.418002  919134 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:14.418018  919134 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:14.418075  919134 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 13:50:14.418093  919134 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:14.418104  919134 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:14.418125  919134 main.go:141] libmachine: Running pre-create checks...
	I0429 13:50:14.418135  919134 main.go:141] libmachine: (no-preload-301942) Calling .PreCreateCheck
	I0429 13:50:14.418504  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:14.419085  919134 main.go:141] libmachine: Creating machine...
	I0429 13:50:14.419101  919134 main.go:141] libmachine: (no-preload-301942) Calling .Create
	I0429 13:50:14.419211  919134 main.go:141] libmachine: (no-preload-301942) Creating KVM machine...
	I0429 13:50:14.420712  919134 main.go:141] libmachine: (no-preload-301942) DBG | found existing default KVM network
	I0429 13:50:14.422099  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.421928  919169 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c3:05:e2} reservation:<nil>}
	I0429 13:50:14.423249  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.423129  919169 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:dc:77} reservation:<nil>}
	I0429 13:50:14.424169  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.424100  919169 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cf:a1:bd} reservation:<nil>}
	I0429 13:50:14.425375  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.425301  919169 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000309200}
	I0429 13:50:14.425431  919134 main.go:141] libmachine: (no-preload-301942) DBG | created network xml: 
	I0429 13:50:14.425470  919134 main.go:141] libmachine: (no-preload-301942) DBG | <network>
	I0429 13:50:14.425485  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <name>mk-no-preload-301942</name>
	I0429 13:50:14.425493  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <dns enable='no'/>
	I0429 13:50:14.425505  919134 main.go:141] libmachine: (no-preload-301942) DBG |   
	I0429 13:50:14.425517  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 13:50:14.425530  919134 main.go:141] libmachine: (no-preload-301942) DBG |     <dhcp>
	I0429 13:50:14.425539  919134 main.go:141] libmachine: (no-preload-301942) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 13:50:14.425553  919134 main.go:141] libmachine: (no-preload-301942) DBG |     </dhcp>
	I0429 13:50:14.425563  919134 main.go:141] libmachine: (no-preload-301942) DBG |   </ip>
	I0429 13:50:14.425571  919134 main.go:141] libmachine: (no-preload-301942) DBG |   
	I0429 13:50:14.425589  919134 main.go:141] libmachine: (no-preload-301942) DBG | </network>
	I0429 13:50:14.425632  919134 main.go:141] libmachine: (no-preload-301942) DBG | 
	I0429 13:50:14.432239  919134 main.go:141] libmachine: (no-preload-301942) DBG | trying to create private KVM network mk-no-preload-301942 192.168.72.0/24...
	I0429 13:50:14.541454  919134 main.go:141] libmachine: (no-preload-301942) DBG | private KVM network mk-no-preload-301942 192.168.72.0/24 created
	I0429 13:50:14.541495  919134 main.go:141] libmachine: (no-preload-301942) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 ...
	I0429 13:50:14.541521  919134 main.go:141] libmachine: (no-preload-301942) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 13:50:14.541582  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.541455  919169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:14.541635  919134 main.go:141] libmachine: (no-preload-301942) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 13:50:14.567653  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0429 13:50:14.567670  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 13:50:14.569332  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 13:50:14.593701  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 13:50:14.600179  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 13:50:14.603724  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 13:50:14.631182  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0429 13:50:14.631214  919134 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 238.360676ms
	I0429 13:50:14.631229  919134 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0429 13:50:14.661666  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 13:50:14.846909  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.846745  919169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa...
	I0429 13:50:15.033421  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0429 13:50:15.033456  919134 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0" took 640.514669ms
	I0429 13:50:15.033472  919134 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0429 13:50:15.095063  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:15.094934  919169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/no-preload-301942.rawdisk...
	I0429 13:50:15.095095  919134 main.go:141] libmachine: (no-preload-301942) DBG | Writing magic tar header
	I0429 13:50:15.095131  919134 main.go:141] libmachine: (no-preload-301942) DBG | Writing SSH key tar header
	I0429 13:50:15.095197  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:15.095148  919169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 ...
	I0429 13:50:15.095395  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942
	I0429 13:50:15.095428  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 13:50:15.095442  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 (perms=drwx------)
	I0429 13:50:15.095464  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 13:50:15.095479  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 13:50:15.095495  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 13:50:15.095510  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:15.095528  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 13:50:15.095539  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 13:50:15.095549  919134 main.go:141] libmachine: (no-preload-301942) Creating domain...
	I0429 13:50:15.095563  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 13:50:15.095576  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 13:50:15.095592  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins
	I0429 13:50:15.095606  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home
	I0429 13:50:15.095623  919134 main.go:141] libmachine: (no-preload-301942) DBG | Skipping /home - not owner
	I0429 13:50:15.097041  919134 main.go:141] libmachine: (no-preload-301942) define libvirt domain using xml: 
	I0429 13:50:15.097467  919134 main.go:141] libmachine: (no-preload-301942) <domain type='kvm'>
	I0429 13:50:15.097496  919134 main.go:141] libmachine: (no-preload-301942)   <name>no-preload-301942</name>
	I0429 13:50:15.097508  919134 main.go:141] libmachine: (no-preload-301942)   <memory unit='MiB'>2200</memory>
	I0429 13:50:15.097520  919134 main.go:141] libmachine: (no-preload-301942)   <vcpu>2</vcpu>
	I0429 13:50:15.097529  919134 main.go:141] libmachine: (no-preload-301942)   <features>
	I0429 13:50:15.097547  919134 main.go:141] libmachine: (no-preload-301942)     <acpi/>
	I0429 13:50:15.097555  919134 main.go:141] libmachine: (no-preload-301942)     <apic/>
	I0429 13:50:15.097563  919134 main.go:141] libmachine: (no-preload-301942)     <pae/>
	I0429 13:50:15.097593  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.097619  919134 main.go:141] libmachine: (no-preload-301942)   </features>
	I0429 13:50:15.097630  919134 main.go:141] libmachine: (no-preload-301942)   <cpu mode='host-passthrough'>
	I0429 13:50:15.097637  919134 main.go:141] libmachine: (no-preload-301942)   
	I0429 13:50:15.097647  919134 main.go:141] libmachine: (no-preload-301942)   </cpu>
	I0429 13:50:15.097653  919134 main.go:141] libmachine: (no-preload-301942)   <os>
	I0429 13:50:15.097667  919134 main.go:141] libmachine: (no-preload-301942)     <type>hvm</type>
	I0429 13:50:15.097674  919134 main.go:141] libmachine: (no-preload-301942)     <boot dev='cdrom'/>
	I0429 13:50:15.097686  919134 main.go:141] libmachine: (no-preload-301942)     <boot dev='hd'/>
	I0429 13:50:15.097697  919134 main.go:141] libmachine: (no-preload-301942)     <bootmenu enable='no'/>
	I0429 13:50:15.097705  919134 main.go:141] libmachine: (no-preload-301942)   </os>
	I0429 13:50:15.097719  919134 main.go:141] libmachine: (no-preload-301942)   <devices>
	I0429 13:50:15.097728  919134 main.go:141] libmachine: (no-preload-301942)     <disk type='file' device='cdrom'>
	I0429 13:50:15.097742  919134 main.go:141] libmachine: (no-preload-301942)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/boot2docker.iso'/>
	I0429 13:50:15.097787  919134 main.go:141] libmachine: (no-preload-301942)       <target dev='hdc' bus='scsi'/>
	I0429 13:50:15.097809  919134 main.go:141] libmachine: (no-preload-301942)       <readonly/>
	I0429 13:50:15.097824  919134 main.go:141] libmachine: (no-preload-301942)     </disk>
	I0429 13:50:15.097832  919134 main.go:141] libmachine: (no-preload-301942)     <disk type='file' device='disk'>
	I0429 13:50:15.097847  919134 main.go:141] libmachine: (no-preload-301942)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 13:50:15.097862  919134 main.go:141] libmachine: (no-preload-301942)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/no-preload-301942.rawdisk'/>
	I0429 13:50:15.097876  919134 main.go:141] libmachine: (no-preload-301942)       <target dev='hda' bus='virtio'/>
	I0429 13:50:15.097884  919134 main.go:141] libmachine: (no-preload-301942)     </disk>
	I0429 13:50:15.097895  919134 main.go:141] libmachine: (no-preload-301942)     <interface type='network'>
	I0429 13:50:15.097903  919134 main.go:141] libmachine: (no-preload-301942)       <source network='mk-no-preload-301942'/>
	I0429 13:50:15.097911  919134 main.go:141] libmachine: (no-preload-301942)       <model type='virtio'/>
	I0429 13:50:15.097921  919134 main.go:141] libmachine: (no-preload-301942)     </interface>
	I0429 13:50:15.097931  919134 main.go:141] libmachine: (no-preload-301942)     <interface type='network'>
	I0429 13:50:15.097941  919134 main.go:141] libmachine: (no-preload-301942)       <source network='default'/>
	I0429 13:50:15.097950  919134 main.go:141] libmachine: (no-preload-301942)       <model type='virtio'/>
	I0429 13:50:15.097965  919134 main.go:141] libmachine: (no-preload-301942)     </interface>
	I0429 13:50:15.097993  919134 main.go:141] libmachine: (no-preload-301942)     <serial type='pty'>
	I0429 13:50:15.098017  919134 main.go:141] libmachine: (no-preload-301942)       <target port='0'/>
	I0429 13:50:15.098042  919134 main.go:141] libmachine: (no-preload-301942)     </serial>
	I0429 13:50:15.098054  919134 main.go:141] libmachine: (no-preload-301942)     <console type='pty'>
	I0429 13:50:15.098063  919134 main.go:141] libmachine: (no-preload-301942)       <target type='serial' port='0'/>
	I0429 13:50:15.098073  919134 main.go:141] libmachine: (no-preload-301942)     </console>
	I0429 13:50:15.098081  919134 main.go:141] libmachine: (no-preload-301942)     <rng model='virtio'>
	I0429 13:50:15.098098  919134 main.go:141] libmachine: (no-preload-301942)       <backend model='random'>/dev/random</backend>
	I0429 13:50:15.098110  919134 main.go:141] libmachine: (no-preload-301942)     </rng>
	I0429 13:50:15.098117  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.098125  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.098147  919134 main.go:141] libmachine: (no-preload-301942)   </devices>
	I0429 13:50:15.098161  919134 main.go:141] libmachine: (no-preload-301942) </domain>
	I0429 13:50:15.098177  919134 main.go:141] libmachine: (no-preload-301942) 
	I0429 13:50:15.103207  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:f6:20:b0 in network default
	I0429 13:50:15.103953  919134 main.go:141] libmachine: (no-preload-301942) Ensuring networks are active...
	I0429 13:50:15.103986  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:15.105239  919134 main.go:141] libmachine: (no-preload-301942) Ensuring network default is active
	I0429 13:50:15.105654  919134 main.go:141] libmachine: (no-preload-301942) Ensuring network mk-no-preload-301942 is active
	I0429 13:50:15.106238  919134 main.go:141] libmachine: (no-preload-301942) Getting domain xml...
	I0429 13:50:15.107141  919134 main.go:141] libmachine: (no-preload-301942) Creating domain...
	I0429 13:50:15.816897  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0429 13:50:15.816939  919134 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.4240432s
	I0429 13:50:15.816960  919134 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0429 13:50:16.323723  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0429 13:50:16.323758  919134 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 1.930893959s
	I0429 13:50:16.323775  919134 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0429 13:50:16.328410  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0429 13:50:16.328442  919134 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0" took 1.935615361s
	I0429 13:50:16.328454  919134 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0429 13:50:16.337237  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0429 13:50:16.337272  919134 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0" took 1.944348532s
	I0429 13:50:16.337288  919134 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0429 13:50:16.350125  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0429 13:50:16.350158  919134 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0" took 1.957379971s
	I0429 13:50:16.350170  919134 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0429 13:50:16.350190  919134 cache.go:87] Successfully saved all images to host disk.
	I0429 13:50:17.250234  919134 main.go:141] libmachine: (no-preload-301942) Waiting to get IP...
	I0429 13:50:17.251091  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.251602  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.251627  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.251585  919169 retry.go:31] will retry after 281.994457ms: waiting for machine to come up
	I0429 13:50:17.535540  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.536080  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.536107  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.536045  919169 retry.go:31] will retry after 322.982246ms: waiting for machine to come up
	I0429 13:50:17.860643  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.861388  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.861420  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.861335  919169 retry.go:31] will retry after 446.702671ms: waiting for machine to come up
	I0429 13:50:18.310456  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:18.311239  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:18.311273  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:18.311193  919169 retry.go:31] will retry after 497.51088ms: waiting for machine to come up
	I0429 13:50:18.809928  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:18.810462  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:18.810493  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:18.810410  919169 retry.go:31] will retry after 509.953214ms: waiting for machine to come up
	I0429 13:50:17.478893  919444 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:50:17.478975  919444 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:50:17.478994  919444 cache.go:56] Caching tarball of preloaded images
	I0429 13:50:17.479144  919444 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:50:17.479168  919444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 13:50:17.479334  919444 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json ...
	I0429 13:50:17.479399  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json: {Name:mkca47fb2fbc9a743c10ae5b852ab96d0c7c3058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:50:17.479660  919444 start.go:360] acquireMachinesLock for embed-certs-954581: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:50:18.100289  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:20.595440  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:19.322433  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:19.323177  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:19.323215  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:19.323077  919169 retry.go:31] will retry after 705.195479ms: waiting for machine to come up
	I0429 13:50:20.029430  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:20.029952  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:20.029983  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:20.029895  919169 retry.go:31] will retry after 1.070457514s: waiting for machine to come up
	I0429 13:50:21.102218  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:21.102792  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:21.102853  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:21.102755  919169 retry.go:31] will retry after 1.2238304s: waiting for machine to come up
	I0429 13:50:22.329052  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:22.329671  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:22.329697  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:22.329606  919169 retry.go:31] will retry after 1.385246734s: waiting for machine to come up
	I0429 13:50:23.716577  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:23.717345  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:23.717379  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:23.717287  919169 retry.go:31] will retry after 1.569748013s: waiting for machine to come up
	I0429 13:50:23.096395  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:25.594532  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:27.595225  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:25.288695  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:25.289272  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:25.289302  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:25.289224  919169 retry.go:31] will retry after 1.89390905s: waiting for machine to come up
	I0429 13:50:27.185386  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:27.185906  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:27.185945  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:27.185820  919169 retry.go:31] will retry after 3.391341067s: waiting for machine to come up
	I0429 13:50:30.095415  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:32.594923  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:30.578512  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:30.579236  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:30.579292  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:30.579152  919169 retry.go:31] will retry after 3.587589732s: waiting for machine to come up
	I0429 13:50:34.171105  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:34.171890  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:34.171924  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:34.171817  919169 retry.go:31] will retry after 5.321567172s: waiting for machine to come up
	I0429 13:50:34.595127  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:36.595603  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:39.664237  916079 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 13:50:39.664595  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:39.664825  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:50:39.093863  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:41.594132  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:44.087226  919444 start.go:364] duration metric: took 26.607508523s to acquireMachinesLock for "embed-certs-954581"
	I0429 13:50:44.087305  919444 start.go:93] Provisioning new machine with config: &{Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:50:44.087479  919444 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 13:50:39.495881  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.496984  919134 main.go:141] libmachine: (no-preload-301942) Found IP for machine: 192.168.72.248
	I0429 13:50:39.497036  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has current primary IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.497048  919134 main.go:141] libmachine: (no-preload-301942) Reserving static IP address...
	I0429 13:50:39.497537  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find host DHCP lease matching {name: "no-preload-301942", mac: "52:54:00:30:7e:ee", ip: "192.168.72.248"} in network mk-no-preload-301942
	I0429 13:50:39.604213  919134 main.go:141] libmachine: (no-preload-301942) DBG | Getting to WaitForSSH function...
	I0429 13:50:39.604251  919134 main.go:141] libmachine: (no-preload-301942) Reserved static IP address: 192.168.72.248
	I0429 13:50:39.604264  919134 main.go:141] libmachine: (no-preload-301942) Waiting for SSH to be available...
	I0429 13:50:39.607385  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.607937  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942
	I0429 13:50:39.607966  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find defined IP address of network mk-no-preload-301942 interface with MAC address 52:54:00:30:7e:ee
	I0429 13:50:39.608095  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH client type: external
	I0429 13:50:39.608146  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa (-rw-------)
	I0429 13:50:39.608195  919134 main.go:141] libmachine: (no-preload-301942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:50:39.608219  919134 main.go:141] libmachine: (no-preload-301942) DBG | About to run SSH command:
	I0429 13:50:39.608238  919134 main.go:141] libmachine: (no-preload-301942) DBG | exit 0
	I0429 13:50:39.612398  919134 main.go:141] libmachine: (no-preload-301942) DBG | SSH cmd err, output: exit status 255: 
	I0429 13:50:39.612434  919134 main.go:141] libmachine: (no-preload-301942) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 13:50:39.612467  919134 main.go:141] libmachine: (no-preload-301942) DBG | command : exit 0
	I0429 13:50:39.612478  919134 main.go:141] libmachine: (no-preload-301942) DBG | err     : exit status 255
	I0429 13:50:39.612486  919134 main.go:141] libmachine: (no-preload-301942) DBG | output  : 
	I0429 13:50:42.613141  919134 main.go:141] libmachine: (no-preload-301942) DBG | Getting to WaitForSSH function...
	I0429 13:50:42.616037  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.616489  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.616521  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.616752  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH client type: external
	I0429 13:50:42.616800  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa (-rw-------)
	I0429 13:50:42.616838  919134 main.go:141] libmachine: (no-preload-301942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:50:42.616853  919134 main.go:141] libmachine: (no-preload-301942) DBG | About to run SSH command:
	I0429 13:50:42.616888  919134 main.go:141] libmachine: (no-preload-301942) DBG | exit 0
	I0429 13:50:42.740168  919134 main.go:141] libmachine: (no-preload-301942) DBG | SSH cmd err, output: <nil>: 
	I0429 13:50:42.740617  919134 main.go:141] libmachine: (no-preload-301942) KVM machine creation complete!
	I0429 13:50:42.740847  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:42.741473  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:42.741723  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:42.741935  919134 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 13:50:42.741952  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:50:42.743248  919134 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 13:50:42.743268  919134 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 13:50:42.743276  919134 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 13:50:42.743284  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.745486  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.745908  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.745963  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.746103  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.746325  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.746498  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.746646  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.746854  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.747114  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.747127  919134 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 13:50:42.851272  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:50:42.851306  919134 main.go:141] libmachine: Detecting the provisioner...
	I0429 13:50:42.851318  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.854501  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.854939  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.854967  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.855177  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.855428  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.855614  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.855788  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.855976  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.856225  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.856240  919134 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 13:50:42.960863  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 13:50:42.961027  919134 main.go:141] libmachine: found compatible host: buildroot
	I0429 13:50:42.961048  919134 main.go:141] libmachine: Provisioning with buildroot...
	I0429 13:50:42.961061  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:42.961409  919134 buildroot.go:166] provisioning hostname "no-preload-301942"
	I0429 13:50:42.961443  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:42.961668  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.965374  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.965807  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.965858  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.966159  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.966442  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.966670  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.966824  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.967046  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.967254  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.967268  919134 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-301942 && echo "no-preload-301942" | sudo tee /etc/hostname
	I0429 13:50:43.089711  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-301942
	
	I0429 13:50:43.089779  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.093425  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.093857  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.093912  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.094124  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.094376  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.094579  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.094715  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.094944  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.095144  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.095162  919134 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-301942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-301942/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-301942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:50:43.211925  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:50:43.211960  919134 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:50:43.212002  919134 buildroot.go:174] setting up certificates
	I0429 13:50:43.212034  919134 provision.go:84] configureAuth start
	I0429 13:50:43.212046  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:43.212387  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:43.215462  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.215970  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.216001  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.216365  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.219318  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.219736  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.219759  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.219939  919134 provision.go:143] copyHostCerts
	I0429 13:50:43.220021  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:50:43.220036  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:50:43.220126  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:50:43.220231  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:50:43.220243  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:50:43.220285  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:50:43.220361  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:50:43.220372  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:50:43.220408  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:50:43.220469  919134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.no-preload-301942 san=[127.0.0.1 192.168.72.248 localhost minikube no-preload-301942]
	I0429 13:50:43.366687  919134 provision.go:177] copyRemoteCerts
	I0429 13:50:43.366767  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:50:43.366795  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.370227  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.370514  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.370549  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.370809  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.371087  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.371290  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.371506  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:43.455647  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:50:43.485751  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 13:50:43.514982  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 13:50:43.547132  919134 provision.go:87] duration metric: took 335.077968ms to configureAuth
	I0429 13:50:43.547181  919134 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:50:43.547440  919134 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:43.547589  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.550839  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.551280  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.551312  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.551550  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.551807  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.552018  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.552204  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.552410  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.552642  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.552664  919134 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:50:43.833557  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
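The %!s(MISSING) token in the command echoed above is a logging artifact, not part of what ran on the VM: the shell command literally contains printf %s, and when that string is later pushed through a printf-style formatter with no argument supplied for the verb, Go's fmt package renders the verb as %!s(MISSING). A tiny, self-contained reproduction of that fmt behaviour (illustrative only; it does not trace minikube's logging path):

	package main

	import "fmt"

	func main() {
		cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		// Using the command itself as a format string, with nothing bound to %s,
		// is what produces the %!s(MISSING) seen in the log output.
		fmt.Printf(cmd + "\n")
	}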
	I0429 13:50:43.833590  919134 main.go:141] libmachine: Checking connection to Docker...
	I0429 13:50:43.833599  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetURL
	I0429 13:50:43.834855  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using libvirt version 6000000
	I0429 13:50:43.837263  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.837642  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.837674  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.837907  919134 main.go:141] libmachine: Docker is up and running!
	I0429 13:50:43.837922  919134 main.go:141] libmachine: Reticulating splines...
	I0429 13:50:43.837930  919134 client.go:171] duration metric: took 29.420004419s to LocalClient.Create
	I0429 13:50:43.837958  919134 start.go:167] duration metric: took 29.420068339s to libmachine.API.Create "no-preload-301942"
	I0429 13:50:43.837979  919134 start.go:293] postStartSetup for "no-preload-301942" (driver="kvm2")
	I0429 13:50:43.837989  919134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:50:43.838008  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:43.838266  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:50:43.838292  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.840823  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.841166  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.841199  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.841350  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.841546  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.841745  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.841892  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:43.925132  919134 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:50:43.930411  919134 info.go:137] Remote host: Buildroot 2023.02.9
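The sshutil client above authenticates as user docker with the machine's id_rsa and then probes the guest, here with cat /etc/os-release (reporting Buildroot 2023.02.9). A rough stand-alone equivalent using golang.org/x/crypto/ssh, shown only as a sketch of that interaction (host-key verification is disabled for brevity, and this is not minikube's sshutil implementation):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.72.248:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}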
	I0429 13:50:43.930445  919134 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:50:43.930527  919134 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:50:43.930623  919134 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:50:43.930723  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:50:43.942672  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:50:43.971801  919134 start.go:296] duration metric: took 133.803879ms for postStartSetup
	I0429 13:50:43.971890  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:43.972556  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:43.975803  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.976229  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.976260  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.976596  919134 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/config.json ...
	I0429 13:50:43.976868  919134 start.go:128] duration metric: took 29.583714024s to createHost
	I0429 13:50:43.976900  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.979259  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.979636  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.979676  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.979836  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.980088  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.980243  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.980361  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.980517  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.980720  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.980736  919134 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 13:50:44.087014  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398644.076196611
	
	I0429 13:50:44.087051  919134 fix.go:216] guest clock: 1714398644.076196611
	I0429 13:50:44.087059  919134 fix.go:229] Guest: 2024-04-29 13:50:44.076196611 +0000 UTC Remote: 2024-04-29 13:50:43.976884358 +0000 UTC m=+29.734542335 (delta=99.312253ms)
	I0429 13:50:44.087088  919134 fix.go:200] guest clock delta is within tolerance: 99.312253ms
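The tolerance check is driven by the guest's date +%s.%N output (rendered above with the same missing-argument logging artifact). The guest timestamp is parsed, compared with the host-recorded creation time, and the absolute delta (99.312253ms here) has to stay under a threshold. A small sketch of that comparison using the two timestamps from the log, with an assumed threshold of one second purely for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output into a time.Time.
	// It assumes the fractional part carries the full nine nanosecond digits.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1714398644.076196611")
		if err != nil {
			panic(err)
		}
		host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2024-04-29 13:50:43.976884358 +0000 UTC")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		tolerance := time.Second // assumed threshold for this sketch
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	}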
	I0429 13:50:44.087095  919134 start.go:83] releasing machines lock for "no-preload-301942", held for 29.694148543s
	I0429 13:50:44.087135  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.087477  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:44.091052  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.091524  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.091555  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.091785  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092442  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092678  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092782  919134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:50:44.092842  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:44.092962  919134 ssh_runner.go:195] Run: cat /version.json
	I0429 13:50:44.092982  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:44.096519  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.096813  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.096868  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.096891  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.097103  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:44.097327  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.097344  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:44.097361  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.097505  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:44.097661  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:44.097757  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:44.097818  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:44.097945  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:44.098057  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:44.214425  919134 ssh_runner.go:195] Run: systemctl --version
	I0429 13:50:44.223033  919134 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:50:44.405327  919134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:50:44.412344  919134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:50:44.412416  919134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:50:44.431971  919134 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 13:50:44.432013  919134 start.go:494] detecting cgroup driver to use...
	I0429 13:50:44.432107  919134 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:50:44.451682  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:50:44.468282  919134 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:50:44.468358  919134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:50:44.483997  919134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:50:44.500600  919134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:50:44.622246  919134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:50:44.786391  919134 docker.go:233] disabling docker service ...
	I0429 13:50:44.786480  919134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:50:44.803848  919134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:50:44.819965  919134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:50:44.970777  919134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:50:45.096781  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:50:45.115097  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:50:45.138507  919134 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:50:45.138569  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.151154  919134 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:50:45.151259  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.163661  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.179765  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.195459  919134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:50:45.210796  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.222690  919134 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.245769  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
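Taken together, the sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a sketch covering only the keys touched here; everything else in the drop-in stays as shipped in the ISO):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]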
	I0429 13:50:45.258607  919134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:50:45.273155  919134 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 13:50:45.273257  919134 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 13:50:45.294632  919134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:50:45.306540  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:50:45.449440  919134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:50:45.615354  919134 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:50:45.615506  919134 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:50:45.621071  919134 start.go:562] Will wait 60s for crictl version
	I0429 13:50:45.621161  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:45.625720  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:50:45.671281  919134 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:50:45.671483  919134 ssh_runner.go:195] Run: crio --version
	I0429 13:50:45.711166  919134 ssh_runner.go:195] Run: crio --version
	I0429 13:50:45.746855  919134 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:50:44.090086  919444 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 13:50:44.090318  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:50:44.090378  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:50:44.111941  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0429 13:50:44.112397  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:50:44.113093  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:50:44.113125  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:50:44.113486  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:50:44.113684  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:50:44.113843  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:50:44.114029  919444 start.go:159] libmachine.API.Create for "embed-certs-954581" (driver="kvm2")
	I0429 13:50:44.114063  919444 client.go:168] LocalClient.Create starting
	I0429 13:50:44.114100  919444 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 13:50:44.114155  919444 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:44.114175  919444 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:44.114247  919444 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 13:50:44.114274  919444 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:44.114291  919444 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:44.114318  919444 main.go:141] libmachine: Running pre-create checks...
	I0429 13:50:44.114330  919444 main.go:141] libmachine: (embed-certs-954581) Calling .PreCreateCheck
	I0429 13:50:44.114830  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:50:44.115484  919444 main.go:141] libmachine: Creating machine...
	I0429 13:50:44.115504  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Create
	I0429 13:50:44.115675  919444 main.go:141] libmachine: (embed-certs-954581) Creating KVM machine...
	I0429 13:50:44.117218  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found existing default KVM network
	I0429 13:50:44.118861  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.118696  919618 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
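network.go reports the chosen private subnet along with the addresses derived from it (gateway .1, client range .2 to .254, broadcast .255). For a /24 that derivation can be sketched as below; this only illustrates the logged fields and is not the code minikube uses to pick a free subnet:

	package main

	import (
		"fmt"
		"net"
	)

	// describeSubnet derives the gateway, usable client range and broadcast
	// address for a /24 network, mirroring the fields printed by network.go.
	func describeSubnet(cidr string) error {
		ip, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return err
		}
		base := ip.Mask(ipnet.Mask).To4()
		gateway := net.IPv4(base[0], base[1], base[2], 1)
		clientMin := net.IPv4(base[0], base[1], base[2], 2)
		clientMax := net.IPv4(base[0], base[1], base[2], 254)
		broadcast := net.IPv4(base[0], base[1], base[2], 255)
		fmt.Printf("Gateway:%s ClientMin:%s ClientMax:%s Broadcast:%s\n",
			gateway, clientMin, clientMax, broadcast)
		return nil
	}

	func main() {
		if err := describeSubnet("192.168.39.0/24"); err != nil {
			panic(err)
		}
	}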
	I0429 13:50:44.118917  919444 main.go:141] libmachine: (embed-certs-954581) DBG | created network xml: 
	I0429 13:50:44.118931  919444 main.go:141] libmachine: (embed-certs-954581) DBG | <network>
	I0429 13:50:44.118960  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <name>mk-embed-certs-954581</name>
	I0429 13:50:44.118974  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <dns enable='no'/>
	I0429 13:50:44.118984  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   
	I0429 13:50:44.118999  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 13:50:44.119009  919444 main.go:141] libmachine: (embed-certs-954581) DBG |     <dhcp>
	I0429 13:50:44.119023  919444 main.go:141] libmachine: (embed-certs-954581) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 13:50:44.119037  919444 main.go:141] libmachine: (embed-certs-954581) DBG |     </dhcp>
	I0429 13:50:44.119055  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   </ip>
	I0429 13:50:44.119065  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   
	I0429 13:50:44.119074  919444 main.go:141] libmachine: (embed-certs-954581) DBG | </network>
	I0429 13:50:44.119084  919444 main.go:141] libmachine: (embed-certs-954581) DBG | 
	I0429 13:50:44.124916  919444 main.go:141] libmachine: (embed-certs-954581) DBG | trying to create private KVM network mk-embed-certs-954581 192.168.39.0/24...
	I0429 13:50:44.216272  919444 main.go:141] libmachine: (embed-certs-954581) DBG | private KVM network mk-embed-certs-954581 192.168.39.0/24 created
	I0429 13:50:44.216308  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.216216  919618 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:44.216331  919444 main.go:141] libmachine: (embed-certs-954581) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 ...
	I0429 13:50:44.216342  919444 main.go:141] libmachine: (embed-certs-954581) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 13:50:44.216373  919444 main.go:141] libmachine: (embed-certs-954581) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 13:50:44.509119  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.508943  919618 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa...
	I0429 13:50:44.591508  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.591321  919618 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/embed-certs-954581.rawdisk...
	I0429 13:50:44.591544  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Writing magic tar header
	I0429 13:50:44.591562  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Writing SSH key tar header
	I0429 13:50:44.591708  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.591574  919618 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 ...
	I0429 13:50:44.591771  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581
	I0429 13:50:44.591785  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 (perms=drwx------)
	I0429 13:50:44.591804  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 13:50:44.591818  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 13:50:44.591829  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 13:50:44.591843  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:44.591854  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 13:50:44.591868  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 13:50:44.591877  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 13:50:44.591888  919444 main.go:141] libmachine: (embed-certs-954581) Creating domain...
	I0429 13:50:44.591906  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 13:50:44.591915  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 13:50:44.591935  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins
	I0429 13:50:44.591952  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home
	I0429 13:50:44.591965  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Skipping /home - not owner
	I0429 13:50:44.593385  919444 main.go:141] libmachine: (embed-certs-954581) define libvirt domain using xml: 
	I0429 13:50:44.593425  919444 main.go:141] libmachine: (embed-certs-954581) <domain type='kvm'>
	I0429 13:50:44.593439  919444 main.go:141] libmachine: (embed-certs-954581)   <name>embed-certs-954581</name>
	I0429 13:50:44.593452  919444 main.go:141] libmachine: (embed-certs-954581)   <memory unit='MiB'>2200</memory>
	I0429 13:50:44.593463  919444 main.go:141] libmachine: (embed-certs-954581)   <vcpu>2</vcpu>
	I0429 13:50:44.593470  919444 main.go:141] libmachine: (embed-certs-954581)   <features>
	I0429 13:50:44.593481  919444 main.go:141] libmachine: (embed-certs-954581)     <acpi/>
	I0429 13:50:44.593492  919444 main.go:141] libmachine: (embed-certs-954581)     <apic/>
	I0429 13:50:44.593501  919444 main.go:141] libmachine: (embed-certs-954581)     <pae/>
	I0429 13:50:44.593522  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.593557  919444 main.go:141] libmachine: (embed-certs-954581)   </features>
	I0429 13:50:44.593582  919444 main.go:141] libmachine: (embed-certs-954581)   <cpu mode='host-passthrough'>
	I0429 13:50:44.593596  919444 main.go:141] libmachine: (embed-certs-954581)   
	I0429 13:50:44.593607  919444 main.go:141] libmachine: (embed-certs-954581)   </cpu>
	I0429 13:50:44.593620  919444 main.go:141] libmachine: (embed-certs-954581)   <os>
	I0429 13:50:44.593631  919444 main.go:141] libmachine: (embed-certs-954581)     <type>hvm</type>
	I0429 13:50:44.593642  919444 main.go:141] libmachine: (embed-certs-954581)     <boot dev='cdrom'/>
	I0429 13:50:44.593653  919444 main.go:141] libmachine: (embed-certs-954581)     <boot dev='hd'/>
	I0429 13:50:44.593739  919444 main.go:141] libmachine: (embed-certs-954581)     <bootmenu enable='no'/>
	I0429 13:50:44.593775  919444 main.go:141] libmachine: (embed-certs-954581)   </os>
	I0429 13:50:44.593790  919444 main.go:141] libmachine: (embed-certs-954581)   <devices>
	I0429 13:50:44.593809  919444 main.go:141] libmachine: (embed-certs-954581)     <disk type='file' device='cdrom'>
	I0429 13:50:44.593823  919444 main.go:141] libmachine: (embed-certs-954581)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/boot2docker.iso'/>
	I0429 13:50:44.593846  919444 main.go:141] libmachine: (embed-certs-954581)       <target dev='hdc' bus='scsi'/>
	I0429 13:50:44.593859  919444 main.go:141] libmachine: (embed-certs-954581)       <readonly/>
	I0429 13:50:44.593878  919444 main.go:141] libmachine: (embed-certs-954581)     </disk>
	I0429 13:50:44.593891  919444 main.go:141] libmachine: (embed-certs-954581)     <disk type='file' device='disk'>
	I0429 13:50:44.593907  919444 main.go:141] libmachine: (embed-certs-954581)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 13:50:44.593935  919444 main.go:141] libmachine: (embed-certs-954581)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/embed-certs-954581.rawdisk'/>
	I0429 13:50:44.593951  919444 main.go:141] libmachine: (embed-certs-954581)       <target dev='hda' bus='virtio'/>
	I0429 13:50:44.593963  919444 main.go:141] libmachine: (embed-certs-954581)     </disk>
	I0429 13:50:44.593974  919444 main.go:141] libmachine: (embed-certs-954581)     <interface type='network'>
	I0429 13:50:44.593983  919444 main.go:141] libmachine: (embed-certs-954581)       <source network='mk-embed-certs-954581'/>
	I0429 13:50:44.593993  919444 main.go:141] libmachine: (embed-certs-954581)       <model type='virtio'/>
	I0429 13:50:44.594000  919444 main.go:141] libmachine: (embed-certs-954581)     </interface>
	I0429 13:50:44.594011  919444 main.go:141] libmachine: (embed-certs-954581)     <interface type='network'>
	I0429 13:50:44.594027  919444 main.go:141] libmachine: (embed-certs-954581)       <source network='default'/>
	I0429 13:50:44.594038  919444 main.go:141] libmachine: (embed-certs-954581)       <model type='virtio'/>
	I0429 13:50:44.594046  919444 main.go:141] libmachine: (embed-certs-954581)     </interface>
	I0429 13:50:44.594056  919444 main.go:141] libmachine: (embed-certs-954581)     <serial type='pty'>
	I0429 13:50:44.594064  919444 main.go:141] libmachine: (embed-certs-954581)       <target port='0'/>
	I0429 13:50:44.594073  919444 main.go:141] libmachine: (embed-certs-954581)     </serial>
	I0429 13:50:44.594080  919444 main.go:141] libmachine: (embed-certs-954581)     <console type='pty'>
	I0429 13:50:44.594095  919444 main.go:141] libmachine: (embed-certs-954581)       <target type='serial' port='0'/>
	I0429 13:50:44.594106  919444 main.go:141] libmachine: (embed-certs-954581)     </console>
	I0429 13:50:44.594113  919444 main.go:141] libmachine: (embed-certs-954581)     <rng model='virtio'>
	I0429 13:50:44.594125  919444 main.go:141] libmachine: (embed-certs-954581)       <backend model='random'>/dev/random</backend>
	I0429 13:50:44.594142  919444 main.go:141] libmachine: (embed-certs-954581)     </rng>
	I0429 13:50:44.594165  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.594198  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.594209  919444 main.go:141] libmachine: (embed-certs-954581)   </devices>
	I0429 13:50:44.594220  919444 main.go:141] libmachine: (embed-certs-954581) </domain>
	I0429 13:50:44.594234  919444 main.go:141] libmachine: (embed-certs-954581) 
	I0429 13:50:44.598938  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:9a:a9:58 in network default
	I0429 13:50:44.599584  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:44.599601  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring networks are active...
	I0429 13:50:44.600452  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring network default is active
	I0429 13:50:44.600784  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring network mk-embed-certs-954581 is active
	I0429 13:50:44.601376  919444 main.go:141] libmachine: (embed-certs-954581) Getting domain xml...
	I0429 13:50:44.602165  919444 main.go:141] libmachine: (embed-certs-954581) Creating domain...
	I0429 13:50:46.030304  919444 main.go:141] libmachine: (embed-certs-954581) Waiting to get IP...
	I0429 13:50:46.031166  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.031816  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.031848  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.031781  919618 retry.go:31] will retry after 254.67243ms: waiting for machine to come up
	I0429 13:50:46.288591  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.289494  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.289528  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.289429  919618 retry.go:31] will retry after 297.459928ms: waiting for machine to come up
	I0429 13:50:46.589111  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.589816  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.589848  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.589774  919618 retry.go:31] will retry after 315.635792ms: waiting for machine to come up
	I0429 13:50:46.907825  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.908923  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.908956  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.908839  919618 retry.go:31] will retry after 450.723175ms: waiting for machine to come up
	I0429 13:50:47.361803  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:47.362390  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:47.362427  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:47.362341  919618 retry.go:31] will retry after 633.317544ms: waiting for machine to come up
	I0429 13:50:44.665038  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:44.665327  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:50:43.595040  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:46.100624  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:45.748432  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:45.751874  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:45.752355  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:45.752385  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:45.752670  919134 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 13:50:45.757915  919134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:50:45.780500  919134 kubeadm.go:877] updating cluster {Name:no-preload-301942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:50:45.780669  919134 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:50:45.780712  919134 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:50:45.821848  919134 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 13:50:45.821887  919134 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 13:50:45.821975  919134 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:45.822010  919134 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.822121  919134 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.822162  919134 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 13:50:45.822158  919134 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.822162  919134 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.822465  919134 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.822733  919134 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.823528  919134 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.823583  919134 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.823585  919134 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.823528  919134 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 13:50:45.823703  919134 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.823703  919134 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:45.823772  919134 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.824256  919134 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.956948  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.958745  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.963110  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.979658  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.982114  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.986549  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.999108  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 13:50:46.036991  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.145409  919134 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 13:50:46.145479  919134 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:46.145417  919134 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 13:50:46.145543  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.145543  919134 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:46.145594  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.183007  919134 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 13:50:46.183068  919134 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:46.183124  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.237111  919134 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 13:50:46.237172  919134 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:46.237237  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.238614  919134 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 13:50:46.238660  919134 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 13:50:46.238698  919134 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:46.238763  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.238663  919134 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:46.238854  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.253235  919134 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0429 13:50:46.253304  919134 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0429 13:50:46.253366  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.262094  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:46.262145  919134 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 13:50:46.262181  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:46.262189  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:46.262199  919134 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.262217  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:46.262236  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.262238  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:46.262244  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:46.264771  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0429 13:50:46.419685  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 13:50:46.419828  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:46.419897  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.420241  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 13:50:46.420347  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:50:46.452026  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 13:50:46.452151  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 13:50:46.452194  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 13:50:46.452257  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:46.452165  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:46.452261  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:46.452033  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 13:50:46.452496  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:50:46.474013  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0429 13:50:46.474072  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0429 13:50:46.474148  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0429 13:50:46.474261  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0429 13:50:46.527195  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0429 13:50:46.527258  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0429 13:50:46.527279  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 13:50:46.527449  919134 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:46.527645  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0429 13:50:46.527675  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0429 13:50:46.527719  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0429 13:50:46.527752  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
	I0429 13:50:46.527766  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0429 13:50:46.527842  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0429 13:50:46.527864  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0429 13:50:46.527886  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0429 13:50:46.527772  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0429 13:50:46.527931  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0429 13:50:46.597013  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 13:50:46.597072  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
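
Editor's note: the lines above show the cache-loading pattern minikube uses for each image — probe the destination on the VM with stat -c "%s %y" and copy the tarball over only when the probe fails. The Go sketch below illustrates that check-then-copy flow under simplified assumptions; ensureFile and the local cp call are hypothetical stand-ins, since minikube runs the equivalent stat and scp over SSH through its ssh_runner package.

// A minimal sketch of the "probe with stat, copy on miss" pattern in the log.
package main

import (
	"fmt"
	"os/exec"
)

// ensureFile copies src to dst only when a size/mtime probe of dst fails,
// which corresponds to the "Process exited with status 1 ... No such file or
// directory" existence-check lines above.
func ensureFile(src, dst string) error {
	// stat -c "%s %y" prints size and modification time; a non-zero exit
	// means the destination is missing and needs to be transferred.
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // already present, skip the copy
	}
	return exec.Command("cp", src, dst).Run()
}

func main() {
	err := ensureFile("/tmp/cache/pause_3.9", "/tmp/images/pause_3.9")
	fmt.Println("ensureFile:", err)
}
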
	I0429 13:50:46.659810  919134 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0429 13:50:46.659909  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0429 13:50:47.442188  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0429 13:50:47.442244  919134 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:47.442305  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:47.997928  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:47.998457  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:47.998491  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:47.998402  919618 retry.go:31] will retry after 649.94283ms: waiting for machine to come up
	I0429 13:50:48.650513  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:48.651154  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:48.651201  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:48.651093  919618 retry.go:31] will retry after 1.191513652s: waiting for machine to come up
	I0429 13:50:49.844201  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:49.844874  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:49.844924  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:49.844827  919618 retry.go:31] will retry after 1.445213488s: waiting for machine to come up
	I0429 13:50:51.291628  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:51.292244  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:51.292273  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:51.292180  919618 retry.go:31] will retry after 1.132788812s: waiting for machine to come up
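
Editor's note: the "will retry after ..." lines come from a retry loop that keeps polling for the new VM's DHCP lease with a growing delay. A minimal sketch of that loop, under stated assumptions: lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff/jitter values only approximate the irregular 0.6s, 1.2s, 1.4s intervals in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder that always fails, standing in for the
// "unable to find current IP address" condition above.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping with a growing, jittered delay between
// attempts, similar to the retry.go behaviour visible in the log.
func waitForIP(attempts int) (string, error) {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := waitForIP(3); err != nil {
		fmt.Println(err)
	}
}
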
	I0429 13:50:48.595575  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:50.596709  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:52.088142  905474 pod_ready.go:81] duration metric: took 4m0.00111967s for pod "kube-proxy-x79g5" in "kube-system" namespace to be "Ready" ...
	E0429 13:50:52.088182  905474 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-x79g5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 13:50:52.088226  905474 pod_ready.go:38] duration metric: took 4m8.040584265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:50:52.088261  905474 kubeadm.go:591] duration metric: took 4m20.995114758s to restartPrimaryControlPlane
	W0429 13:50:52.088344  905474 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 13:50:52.088383  905474 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 13:50:49.373350  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.931007175s)
	I0429 13:50:49.373404  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 13:50:49.373444  919134 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:49.373517  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:51.574587  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.201020256s)
	I0429 13:50:51.574634  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 13:50:51.574669  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:51.574725  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:54.167651  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.592875106s)
	I0429 13:50:54.167760  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 13:50:54.167856  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:54.167957  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:52.427521  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:52.428171  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:52.428206  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:52.428098  919618 retry.go:31] will retry after 1.655977729s: waiting for machine to come up
	I0429 13:50:54.086567  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:54.087168  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:54.087208  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:54.087065  919618 retry.go:31] will retry after 2.560858802s: waiting for machine to come up
	I0429 13:50:56.650010  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:56.650639  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:56.650670  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:56.650606  919618 retry.go:31] will retry after 3.561933506s: waiting for machine to come up
	I0429 13:50:54.664942  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:54.665230  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:50:56.456830  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.288835544s)
	I0429 13:50:56.456870  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 13:50:56.456903  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:56.456967  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:59.159348  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.702341445s)
	I0429 13:50:59.159408  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 13:50:59.159450  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:50:59.159540  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:51:00.214175  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:00.215095  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:51:00.215130  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:51:00.215032  919618 retry.go:31] will retry after 4.090008393s: waiting for machine to come up
	I0429 13:51:01.630259  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.470684812s)
	I0429 13:51:01.630307  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 13:51:01.630355  919134 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:51:01.630429  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:51:04.307738  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:04.308523  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:51:04.308557  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:51:04.308465  919618 retry.go:31] will retry after 4.84749516s: waiting for machine to come up
	I0429 13:51:05.919531  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.289070177s)
	I0429 13:51:05.919584  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 13:51:05.919626  919134 cache_images.go:123] Successfully loaded all cached images
	I0429 13:51:05.919634  919134 cache_images.go:92] duration metric: took 20.097728085s to LoadCachedImages
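
Editor's note: once transferred, each image tarball is loaded into the container storage with `sudo podman load -i <file>`, as the crio.go "Loading image" lines above show. The sketch below runs that command in a loop; the image list is taken from the log, but executing it locally via exec.Command is illustrative only — minikube issues the same command on the VM over SSH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{
		"/var/lib/minikube/images/pause_3.9",
		"/var/lib/minikube/images/storage-provisioner_v5",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	for _, img := range images {
		// podman load reads an image archive into the local container storage.
		out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s", img, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", img)
	}
}
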
	I0429 13:51:05.919646  919134 kubeadm.go:928] updating node { 192.168.72.248 8443 v1.30.0 crio true true} ...
	I0429 13:51:05.919803  919134 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-301942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:51:05.919879  919134 ssh_runner.go:195] Run: crio config
	I0429 13:51:05.974005  919134 cni.go:84] Creating CNI manager for ""
	I0429 13:51:05.974035  919134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:05.974045  919134 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:51:05.974087  919134 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.248 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-301942 NodeName:no-preload-301942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:51:05.974283  919134 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-301942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:51:05.974366  919134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:05.986160  919134 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 13:51:05.986248  919134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:05.998572  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 13:51:05.998593  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 13:51:05.998689  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 13:51:05.998729  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 13:51:05.998578  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 13:51:05.998863  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:06.007136  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 13:51:06.007184  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 13:51:06.007417  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 13:51:06.007451  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 13:51:06.025794  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 13:51:06.086289  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 13:51:06.086353  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
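
Editor's note: the "Not caching binary, using ...?checksum=file:...sha256" lines above mean the kubectl/kubeadm/kubelet binaries are fetched together with a published SHA-256 digest and verified before being placed under /var/lib/minikube/binaries. A minimal sketch of that verification, assuming the real dl.k8s.io URLs but a simplified flow (whole download held in memory, no caching or retries).

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Println(err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	// The .sha256 file holds the hex digest; compare it to our own hash.
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		fmt.Println("checksum mismatch, refusing to install")
		return
	}
	fmt.Println("kubectl verified:", want)
}
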
	I0429 13:51:06.893595  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:51:06.906058  919134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 13:51:06.926657  919134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:51:06.946685  919134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 13:51:06.967107  919134 ssh_runner.go:195] Run: grep 192.168.72.248	control-plane.minikube.internal$ /etc/hosts
	I0429 13:51:06.971804  919134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
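
Editor's note: the grep/echo pipeline above ensures /etc/hosts on the VM maps control-plane.minikube.internal to the node IP exactly once: any stale line is stripped, then the current mapping is appended. The Go sketch below does the same rewrite against a local path so it can be tried safely; hostsPath and ensureHostsEntry are illustrative names, not minikube's API.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in name and appends a
// fresh "ip<TAB>name" mapping, mirroring the shell pipeline in the log.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), name) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/tmp/hosts-example", "192.168.72.248", "control-plane.minikube.internal")
	fmt.Println("ensureHostsEntry:", err)
}
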
	I0429 13:51:06.987553  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:07.114075  919134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:07.133090  919134 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942 for IP: 192.168.72.248
	I0429 13:51:07.133121  919134 certs.go:194] generating shared ca certs ...
	I0429 13:51:07.133144  919134 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.133374  919134 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:51:07.133435  919134 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:51:07.133449  919134 certs.go:256] generating profile certs ...
	I0429 13:51:07.133557  919134 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key
	I0429 13:51:07.133578  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt with IP's: []
	I0429 13:51:07.260676  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt ...
	I0429 13:51:07.260725  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt: {Name:mkb41553d5f76c917cb52d4509ddc4e17f9afc1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.260937  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key ...
	I0429 13:51:07.260950  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key: {Name:mkb3f0986631b04f64ba4141a2169b40442bc714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.261035  919134 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6
	I0429 13:51:07.261051  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.248]
	I0429 13:51:07.407286  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 ...
	I0429 13:51:07.407332  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6: {Name:mk5dd232d592ed352287750e0666fbc7bd901057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.407559  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6 ...
	I0429 13:51:07.407578  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6: {Name:mkf859d5ccfc99dc996fedeca0ebc39e6ef5d546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.407656  919134 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt
	I0429 13:51:07.407733  919134 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key
	I0429 13:51:07.407795  919134 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key
	I0429 13:51:07.407815  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt with IP's: []
	I0429 13:51:07.623612  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt ...
	I0429 13:51:07.623651  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt: {Name:mk9e40ae0d749113254c26277758142a36d613ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.623846  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key ...
	I0429 13:51:07.623868  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key: {Name:mk0982c712a049a37b9b7146b2ab2d48ef52573a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
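
Editor's note: the certs.go/crypto.go lines above generate the profile's client, apiserver, and proxy-client key pairs, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.248]. The sketch below produces a cert with that same SAN list; to stay short it is self-signed, whereas minikube signs these profile certs with the minikubeCA key it found earlier, so treat this only as an illustration of the SAN setup.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SAN list as the apiserver cert generated in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.248"),
		},
	}
	// Self-signed for brevity: template doubles as the issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
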
	I0429 13:51:07.624140  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:51:07.624198  919134 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:51:07.624211  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:51:07.624231  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:51:07.624256  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:51:07.624278  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:51:07.624321  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:07.624966  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:51:07.654509  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:51:07.684299  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:51:07.712923  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:51:07.741578  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:51:07.769477  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 13:51:07.799962  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:51:07.836412  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 13:51:07.869109  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:51:07.898074  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:51:07.928928  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:51:07.956799  919134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:51:07.978029  919134 ssh_runner.go:195] Run: openssl version
	I0429 13:51:07.985120  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:51:08.000058  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.005630  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.005714  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.012796  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:51:08.028428  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:51:08.043976  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.049651  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.049732  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.056596  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 13:51:08.071493  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:51:08.085158  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.090671  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.090765  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.097571  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
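
Editor's note: the openssl/ln commands above install each CA into the system trust store: `openssl x509 -hash -noout` prints the subject hash, and the cert is then linked as "<hash>.0" under /etc/ssl/certs, which is where names like b5213941.0 and 3ec20f2e.0 come from. A small sketch of that step; installCert, certPath, and trustDir are illustrative names, and the sketch links into a scratch directory rather than the real trust store.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert computes the OpenSSL subject hash of a PEM certificate and
// creates a "<hash>.0" symlink to it inside trustDir, mirroring the
// `openssl x509 -hash` + `ln -fs` pair in the log.
func installCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	if err := os.MkdirAll(trustDir, 0o755); err != nil {
		return err
	}
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace any existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs")
	fmt.Println("installCert:", err)
}
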
	I0429 13:51:08.111342  919134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:51:08.116148  919134 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 13:51:08.116214  919134 kubeadm.go:391] StartCluster: {Name:no-preload-301942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:51:08.116289  919134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:51:08.116341  919134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:51:08.160313  919134 cri.go:89] found id: ""
	I0429 13:51:08.160417  919134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:51:08.172726  919134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:08.185461  919134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:08.197620  919134 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:08.197648  919134 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:08.197702  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:08.209739  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:08.209903  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:08.222364  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:08.234620  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:08.234713  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:08.247057  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:08.259300  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:08.259402  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:08.273528  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:08.286481  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:08.286573  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:51:08.300494  919134 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:08.378445  919134 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:08.378541  919134 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:08.508565  919134 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:08.508693  919134 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:08.508794  919134 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:08.818230  919134 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:08.820730  919134 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:08.820871  919134 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:08.820954  919134 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:08.955234  919134 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 13:51:09.042524  919134 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 13:51:09.386399  919134 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 13:51:09.590576  919134 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 13:51:09.668811  919134 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 13:51:09.669024  919134 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-301942] and IPs [192.168.72.248 127.0.0.1 ::1]
	I0429 13:51:09.895601  919134 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 13:51:09.895815  919134 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-301942] and IPs [192.168.72.248 127.0.0.1 ::1]
	I0429 13:51:10.190457  919134 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 13:51:10.537992  919134 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 13:51:10.953831  919134 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 13:51:10.955117  919134 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:11.073119  919134 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:11.208877  919134 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:11.398537  919134 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:11.610750  919134 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:11.896751  919134 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:11.897413  919134 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:11.900746  919134 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:09.158039  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.159434  919444 main.go:141] libmachine: (embed-certs-954581) Found IP for machine: 192.168.39.231
	I0429 13:51:09.159490  919444 main.go:141] libmachine: (embed-certs-954581) Reserving static IP address...
	I0429 13:51:09.159506  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has current primary IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.160452  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find host DHCP lease matching {name: "embed-certs-954581", mac: "52:54:00:dc:58:c7", ip: "192.168.39.231"} in network mk-embed-certs-954581
	I0429 13:51:09.282157  919444 main.go:141] libmachine: (embed-certs-954581) Reserved static IP address: 192.168.39.231
	I0429 13:51:09.282273  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Getting to WaitForSSH function...
	I0429 13:51:09.282302  919444 main.go:141] libmachine: (embed-certs-954581) Waiting for SSH to be available...
	I0429 13:51:09.286545  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.287499  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.287614  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.287677  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using SSH client type: external
	I0429 13:51:09.287693  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa (-rw-------)
	I0429 13:51:09.287726  919444 main.go:141] libmachine: (embed-certs-954581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:51:09.287742  919444 main.go:141] libmachine: (embed-certs-954581) DBG | About to run SSH command:
	I0429 13:51:09.287754  919444 main.go:141] libmachine: (embed-certs-954581) DBG | exit 0
	I0429 13:51:09.416081  919444 main.go:141] libmachine: (embed-certs-954581) DBG | SSH cmd err, output: <nil>: 
	I0429 13:51:09.416369  919444 main.go:141] libmachine: (embed-certs-954581) KVM machine creation complete!
	I0429 13:51:09.416791  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:51:09.417464  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:09.417757  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:09.417997  919444 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 13:51:09.418014  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:09.419718  919444 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 13:51:09.419739  919444 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 13:51:09.419748  919444 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 13:51:09.419756  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.423098  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.423604  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.423635  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.423879  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.424111  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.424310  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.424454  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.424691  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.424971  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.424991  919444 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 13:51:09.547412  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:51:09.547444  919444 main.go:141] libmachine: Detecting the provisioner...
	I0429 13:51:09.547456  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.550641  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.550917  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.550946  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.551146  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.551336  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.551490  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.551640  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.551900  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.552128  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.552143  919444 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 13:51:09.661024  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 13:51:09.661129  919444 main.go:141] libmachine: found compatible host: buildroot
	I0429 13:51:09.661138  919444 main.go:141] libmachine: Provisioning with buildroot...
	I0429 13:51:09.661148  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.661426  919444 buildroot.go:166] provisioning hostname "embed-certs-954581"
	I0429 13:51:09.661455  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.661668  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.664867  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.665322  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.665359  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.665612  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.665823  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.666081  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.666265  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.666480  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.666669  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.666683  919444 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-954581 && echo "embed-certs-954581" | sudo tee /etc/hostname
	I0429 13:51:09.795897  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-954581
	
	I0429 13:51:09.795941  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.798954  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.799330  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.799392  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.799692  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.799960  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.800191  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.800367  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.800575  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.800774  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.800792  919444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-954581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-954581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-954581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:51:09.929304  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:51:09.929339  919444 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:51:09.929380  919444 buildroot.go:174] setting up certificates
	I0429 13:51:09.929393  919444 provision.go:84] configureAuth start
	I0429 13:51:09.929405  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.929792  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:09.933601  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.934038  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.934081  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.934469  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.937554  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.937992  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.938024  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.938212  919444 provision.go:143] copyHostCerts
	I0429 13:51:09.938284  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:51:09.938295  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:51:09.938357  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:51:09.938474  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:51:09.938486  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:51:09.938514  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:51:09.938615  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:51:09.938636  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:51:09.938751  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:51:09.938891  919444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.embed-certs-954581 san=[127.0.0.1 192.168.39.231 embed-certs-954581 localhost minikube]
	I0429 13:51:10.036740  919444 provision.go:177] copyRemoteCerts
	I0429 13:51:10.036843  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:51:10.036891  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.040292  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.040658  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.040687  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.040959  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.041193  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.041377  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.041566  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.127491  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:51:10.159759  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:51:10.191155  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 13:51:10.224289  919444 provision.go:87] duration metric: took 294.865013ms to configureAuth
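The provision.go:117 line above generates the machine's server certificate with the listed SANs and signs it with the ca.pem/ca-key.pem pair that was just copied into the profile. A self-contained sketch of issuing that kind of SAN-bearing certificate with crypto/x509, using throwaway keys and illustrative subject values rather than minikube's real ones:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; errors are ignored for brevity in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert: IP SANs go into IPAddresses, name SANs into DNSNames,
	// matching the san=[...] list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "embed-certs-954581"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
		DNSNames:     []string{"embed-certs-954581", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```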
	I0429 13:51:10.224353  919444 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:51:10.224667  919444 config.go:182] Loaded profile config "embed-certs-954581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:10.224962  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.228688  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.229184  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.229226  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.229432  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.229645  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.230130  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.230367  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.230564  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:10.230841  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:10.230869  919444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:51:10.549937  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:51:10.549971  919444 main.go:141] libmachine: Checking connection to Docker...
	I0429 13:51:10.549980  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetURL
	I0429 13:51:10.551487  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using libvirt version 6000000
	I0429 13:51:10.554365  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.554780  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.554810  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.554952  919444 main.go:141] libmachine: Docker is up and running!
	I0429 13:51:10.554970  919444 main.go:141] libmachine: Reticulating splines...
	I0429 13:51:10.554979  919444 client.go:171] duration metric: took 26.440904795s to LocalClient.Create
	I0429 13:51:10.555005  919444 start.go:167] duration metric: took 26.440979338s to libmachine.API.Create "embed-certs-954581"
	I0429 13:51:10.555015  919444 start.go:293] postStartSetup for "embed-certs-954581" (driver="kvm2")
	I0429 13:51:10.555026  919444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:51:10.555053  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.555298  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:51:10.555317  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.557809  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.558196  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.558261  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.558426  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.558660  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.558873  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.559064  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.646843  919444 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:51:10.652239  919444 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:51:10.652283  919444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:51:10.652363  919444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:51:10.652480  919444 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:51:10.652637  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:51:10.665144  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:10.700048  919444 start.go:296] duration metric: took 145.014358ms for postStartSetup
	I0429 13:51:10.700113  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:51:10.700806  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:10.704150  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.704546  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.704588  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.704894  919444 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json ...
	I0429 13:51:10.705159  919444 start.go:128] duration metric: took 26.61766148s to createHost
	I0429 13:51:10.705198  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.708341  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.708695  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.708744  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.708907  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.709158  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.709373  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.709535  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.709737  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:10.709975  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:10.710045  919444 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 13:51:10.821101  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398670.792988724
	
	I0429 13:51:10.821156  919444 fix.go:216] guest clock: 1714398670.792988724
	I0429 13:51:10.821186  919444 fix.go:229] Guest: 2024-04-29 13:51:10.792988724 +0000 UTC Remote: 2024-04-29 13:51:10.705180277 +0000 UTC m=+53.370769379 (delta=87.808447ms)
	I0429 13:51:10.821211  919444 fix.go:200] guest clock delta is within tolerance: 87.808447ms
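fix.go compares the guest clock against the host-side timestamp and only resynchronizes it when the difference exceeds a tolerance; here the delta of roughly 87.8ms is accepted. A tiny sketch of the same arithmetic using the two timestamps from the log line above (the tolerance constant is an assumption for illustration, not minikube's actual value):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and remote timestamps copied from the fix.go:229 log line.
	guest := time.Date(2024, 4, 29, 13, 51, 10, 792988724, time.UTC)
	remote := time.Date(2024, 4, 29, 13, 51, 10, 705180277, time.UTC)
	const tolerance = 1 * time.Second // illustrative threshold

	delta := guest.Sub(remote) // 87.808447ms
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}
```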
	I0429 13:51:10.821219  919444 start.go:83] releasing machines lock for "embed-certs-954581", held for 26.733962848s
	I0429 13:51:10.821243  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.821536  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:10.825591  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.826071  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.826110  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.826308  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827020  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827253  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827395  919444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:51:10.827445  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.827581  919444 ssh_runner.go:195] Run: cat /version.json
	I0429 13:51:10.827612  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.831679  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.831980  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832152  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.832187  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832370  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.832503  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.832533  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832692  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.832695  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.832903  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.832909  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.833053  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.833065  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.833222  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.944692  919444 ssh_runner.go:195] Run: systemctl --version
	I0429 13:51:10.953884  919444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:51:11.313854  919444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:51:11.320655  919444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:51:11.320739  919444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:51:11.340243  919444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 13:51:11.340289  919444 start.go:494] detecting cgroup driver to use...
	I0429 13:51:11.340377  919444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:51:11.358760  919444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:51:11.374970  919444 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:51:11.375060  919444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:51:11.391326  919444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:51:11.407297  919444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:51:11.529914  919444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:51:11.706413  919444 docker.go:233] disabling docker service ...
	I0429 13:51:11.706566  919444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:51:11.727602  919444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:51:11.746989  919444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:51:11.910976  919444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:51:12.055236  919444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:51:12.074265  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:51:12.099619  919444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:51:12.099712  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.113800  919444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:51:12.113891  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.126548  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.140772  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.153556  919444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:51:12.166840  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.179767  919444 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.203385  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.215991  919444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:51:12.227481  919444 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 13:51:12.227557  919444 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 13:51:12.242915  919444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:51:12.254164  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:12.377997  919444 ssh_runner.go:195] Run: sudo systemctl restart crio
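The run of sed commands above rewrites the /etc/crio/crio.conf.d/02-crio.conf drop-in: the pause image, cgroup_manager, conmon_cgroup and the unprivileged-port sysctl, after which CRI-O is restarted. An in-memory Go sketch of those edits, with illustrative starting file contents rather than the VM's real drop-in:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Point CRI-O at the pause image minikube expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup driver to cgroupfs and pin conmon to the pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	// Allow pods to bind low ports without extra capabilities.
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}
```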
	I0429 13:51:12.535733  919444 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:51:12.535868  919444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:51:12.541810  919444 start.go:562] Will wait 60s for crictl version
	I0429 13:51:12.541922  919444 ssh_runner.go:195] Run: which crictl
	I0429 13:51:12.546649  919444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:51:12.591748  919444 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:51:12.591845  919444 ssh_runner.go:195] Run: crio --version
	I0429 13:51:12.626052  919444 ssh_runner.go:195] Run: crio --version
	I0429 13:51:12.667631  919444 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:51:11.903127  919134 out.go:204]   - Booting up control plane ...
	I0429 13:51:11.903273  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:11.903387  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:11.903603  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:11.922214  919134 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:11.924067  919134 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:11.924158  919134 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:12.092154  919134 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:12.092266  919134 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:13.093743  919134 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001687197s
	I0429 13:51:13.093883  919134 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:12.669298  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:12.672724  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:12.673072  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:12.673106  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:12.673420  919444 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 13:51:12.678957  919444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:51:12.693111  919444 kubeadm.go:877] updating cluster {Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:51:12.693235  919444 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:51:12.693281  919444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:51:12.729971  919444 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 13:51:12.730046  919444 ssh_runner.go:195] Run: which lz4
	I0429 13:51:12.734550  919444 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 13:51:12.739336  919444 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 13:51:12.739396  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 13:51:14.460188  919444 crio.go:462] duration metric: took 1.725660282s to copy over tarball
	I0429 13:51:14.460294  919444 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 13:51:17.150670  919444 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690338757s)
	I0429 13:51:17.150720  919444 crio.go:469] duration metric: took 2.690491973s to extract the tarball
	I0429 13:51:17.150732  919444 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 13:51:17.191432  919444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:51:17.257369  919444 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:51:17.257408  919444 cache_images.go:84] Images are preloaded, skipping loading
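crio.go decides whether the preload tarball still needs to be extracted by listing images through crictl and looking for a sentinel tag such as registry.k8s.io/kube-apiserver:v1.30.0 (the first check above failed, the one after extraction succeeded). A rough sketch of that check; the JSON field names are assumed from crictl's `--output json` format, not taken from the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models the parts of `crictl images --output json` this sketch
// needs; field names are an assumption about crictl's output shape.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	const sentinel = "registry.k8s.io/kube-apiserver:v1.30.0"
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == sentinel {
				fmt.Println("images are preloaded")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}
```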
	I0429 13:51:17.257419  919444 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.0 crio true true} ...
	I0429 13:51:17.257577  919444 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-954581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
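The kubeadm.go:940 block above is the kubelet systemd drop-in minikube renders for this node: it pins the kubelet binary under /var/lib/minikube/binaries and bakes in the node name and node IP. A simplified sketch of rendering such a drop-in with text/template; the struct and template here are stand-ins, not minikube's own:

```go
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants={{.RuntimeService}}

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type unitParams struct {
	RuntimeService string
	KubeletPath    string
	NodeName       string
	NodeIP         string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the rendered drop-in in the log above.
	_ = t.Execute(os.Stdout, unitParams{
		RuntimeService: "crio.service",
		KubeletPath:    "/var/lib/minikube/binaries/v1.30.0/kubelet",
		NodeName:       "embed-certs-954581",
		NodeIP:         "192.168.39.231",
	})
}
```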
	I0429 13:51:17.257676  919444 ssh_runner.go:195] Run: crio config
	I0429 13:51:17.311855  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:51:17.311898  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:17.311914  919444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:51:17.311954  919444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-954581 NodeName:embed-certs-954581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:51:17.312211  919444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-954581"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:51:17.312315  919444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:17.324070  919444 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:51:17.324182  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:51:17.336225  919444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 13:51:17.357281  919444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:51:17.377704  919444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 13:51:14.664929  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:51:14.665184  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
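The kubelet-check above polls http://localhost:10248/healthz until the kubelet answers; the connection-refused error only means the kubelet has not started serving yet. A minimal sketch of such a polling loop (the 4-minute deadline mirrors the "up to 4m0s" message elsewhere in the log, the rest is illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		// Connection refused (as in the log line above) lands here too.
		time.Sleep(time.Second)
	}
	fmt.Println("kubelet did not become healthy in time")
}
```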
	I0429 13:51:17.769062  905474 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (25.68064629s)
	I0429 13:51:17.769159  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:17.794186  905474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:17.810436  905474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:17.823114  905474 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:17.823147  905474 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:17.823218  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:17.836532  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:17.836617  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:17.850081  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:17.864596  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:17.864683  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:17.878422  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:17.890459  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:17.890549  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:17.902981  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:17.915509  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:17.915586  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:51:17.928376  905474 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:18.001121  905474 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:18.001214  905474 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:18.208783  905474 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:18.208956  905474 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:18.209083  905474 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:18.483982  905474 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:18.485806  905474 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:18.485909  905474 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:18.485980  905474 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:18.486064  905474 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 13:51:18.486138  905474 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 13:51:18.486237  905474 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 13:51:18.486317  905474 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 13:51:18.486402  905474 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 13:51:18.486492  905474 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 13:51:18.486621  905474 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 13:51:18.486725  905474 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 13:51:18.486780  905474 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 13:51:18.486855  905474 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:18.572016  905474 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:18.683084  905474 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:18.854327  905474 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:18.916350  905474 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:19.037439  905474 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:19.038227  905474 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:19.045074  905474 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:19.596877  919134 kubeadm.go:309] [api-check] The API server is healthy after 6.503508213s
	I0429 13:51:19.615859  919134 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:19.639427  919134 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:19.674542  919134 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:19.674825  919134 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-301942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:19.689640  919134 kubeadm.go:309] [bootstrap-token] Using token: j7aq9o.yzlr9atiacx5a508
	I0429 13:51:19.691335  919134 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:19.691513  919134 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:19.708711  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:19.720352  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:19.726432  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:19.732813  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:19.738411  919134 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:20.006172  919134 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:20.518396  919134 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:21.006393  919134 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:21.007739  919134 kubeadm.go:309] 
	I0429 13:51:21.007874  919134 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:21.007913  919134 kubeadm.go:309] 
	I0429 13:51:21.008018  919134 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:21.008052  919134 kubeadm.go:309] 
	I0429 13:51:21.008113  919134 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:21.008295  919134 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:21.008387  919134 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:21.008401  919134 kubeadm.go:309] 
	I0429 13:51:21.008508  919134 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:21.008529  919134 kubeadm.go:309] 
	I0429 13:51:21.008599  919134 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:21.008614  919134 kubeadm.go:309] 
	I0429 13:51:21.008713  919134 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:21.008818  919134 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:21.008924  919134 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:21.008935  919134 kubeadm.go:309] 
	I0429 13:51:21.009050  919134 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:21.009156  919134 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:21.009170  919134 kubeadm.go:309] 
	I0429 13:51:21.009272  919134 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j7aq9o.yzlr9atiacx5a508 \
	I0429 13:51:21.009408  919134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:21.009442  919134 kubeadm.go:309] 	--control-plane 
	I0429 13:51:21.009451  919134 kubeadm.go:309] 
	I0429 13:51:21.009553  919134 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:21.009564  919134 kubeadm.go:309] 
	I0429 13:51:21.009666  919134 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j7aq9o.yzlr9atiacx5a508 \
	I0429 13:51:21.009809  919134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:21.011249  919134 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
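The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from a CA certificate file (the file path is illustrative; on a control-plane node the CA lives at /etc/kubernetes/pki/ca.crt):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```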
	I0429 13:51:21.011439  919134 cni.go:84] Creating CNI manager for ""
	I0429 13:51:21.011465  919134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:21.013766  919134 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:17.402878  919444 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0429 13:51:17.415086  919444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:51:17.433461  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:17.581402  919444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:17.605047  919444 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581 for IP: 192.168.39.231
	I0429 13:51:17.605084  919444 certs.go:194] generating shared ca certs ...
	I0429 13:51:17.605111  919444 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.605325  919444 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:51:17.605380  919444 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:51:17.605394  919444 certs.go:256] generating profile certs ...
	I0429 13:51:17.605485  919444 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key
	I0429 13:51:17.605508  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt with IP's: []
	I0429 13:51:17.758345  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt ...
	I0429 13:51:17.758389  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt: {Name:mk71b88bc301f4fb2764d7260d29f72b66fbde57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.758610  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key ...
	I0429 13:51:17.758629  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key: {Name:mke19177b6dd30b6b5cfe16b58aebd77cf405023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.758772  919444 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72
	I0429 13:51:17.758799  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0429 13:51:17.870375  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 ...
	I0429 13:51:17.870434  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72: {Name:mk095bbcf32b9206fd45d75d3fd534fd886deaf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.870704  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72 ...
	I0429 13:51:17.870734  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72: {Name:mk0fe99593f3e0fb6fa58e5506657b0a68dedbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.870868  919444 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt
	I0429 13:51:17.871004  919444 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key
	I0429 13:51:17.871120  919444 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key
	I0429 13:51:17.871147  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt with IP's: []
	I0429 13:51:18.157584  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt ...
	I0429 13:51:18.157634  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt: {Name:mk8374492e4263beb7a626a1c3df0375394ea85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:18.157816  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key ...
	I0429 13:51:18.157831  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key: {Name:mkada9f5504adca9793df490526a46dec967df9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
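The apiserver profile certificate generated above lists 10.96.0.1 among its IP SANs: that is the first address of the 10.96.0.0/12 service CIDR, which the in-cluster "kubernetes" Service receives as its ClusterIP. A one-liner that derives it:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("10.96.0.0/12")
	first := prefix.Masked().Addr().Next() // network address + 1
	fmt.Println(first)                     // 10.96.0.1
}
```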
	I0429 13:51:18.158011  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:51:18.158050  919444 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:51:18.158062  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:51:18.158087  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:51:18.158110  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:51:18.158133  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:51:18.158172  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:18.158763  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:51:18.191100  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:51:18.225913  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:51:18.257976  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:51:18.290013  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 13:51:18.320663  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:51:18.358772  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:51:18.396339  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 13:51:18.430308  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:51:18.462521  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:51:18.506457  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:51:18.554292  919444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:51:18.579419  919444 ssh_runner.go:195] Run: openssl version
	I0429 13:51:18.586461  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:51:18.600965  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.606924  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.607010  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.614494  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:51:18.631116  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:51:18.646180  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.653216  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.653419  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.662380  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:51:18.676780  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:51:18.693706  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.699203  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.699282  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.706196  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
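(Editor's note) The lines above link each extra CA bundle into OpenSSL's hash-named trust directory: `openssl x509 -hash -noout` computes the subject hash, and `ln -fs` creates the `<hash>.0` symlink under /etc/ssl/certs. Below is a minimal local Go sketch of that same step; minikube actually runs these commands on the node over SSH via ssh_runner, and the installCert helper name and paths here are illustrative only, assuming openssl is on PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert links a CA bundle into the OpenSSL hashed trust directory,
// mirroring the openssl x509 -hash / ln -fs pair in the log above.
// (Illustrative only; minikube performs these steps remotely.)
func installCert(pemPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}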
	I0429 13:51:18.722437  919444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:51:18.728022  919444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 13:51:18.728098  919444 kubeadm.go:391] StartCluster: {Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:51:18.728213  919444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:51:18.728319  919444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:51:18.791594  919444 cri.go:89] found id: ""
	I0429 13:51:18.791685  919444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:51:18.808197  919444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:18.820976  919444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:18.834915  919444 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:18.834943  919444 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:18.834993  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:18.847276  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:18.847394  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:18.861110  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:18.872859  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:18.872936  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:18.886991  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:18.902096  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:18.902189  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:18.916831  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:18.931350  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:18.931461  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
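(Editor's note) The grep/rm pairs above are a stale-config probe before kubeadm init: any existing file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed. A hedged Go sketch of that probe-then-remove pattern follows, reading local files instead of shelling out grep/rm over SSH as the log does; cleanStaleConfigs is a hypothetical name, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, so kubeadm init starts from a clean slate.
// Missing files (as on a first start) are simply skipped.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file absent: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s\n", p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}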
	I0429 13:51:18.946903  919444 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:19.273238  919444 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:51:19.046953  905474 out.go:204]   - Booting up control plane ...
	I0429 13:51:19.047090  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:19.047227  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:19.047949  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:19.077574  905474 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:19.078808  905474 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:19.078887  905474 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:19.269608  905474 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:19.269727  905474 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:20.272144  905474 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002535006s
	I0429 13:51:20.272275  905474 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:21.015724  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:21.032514  919134 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 13:51:21.067903  919134 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:21.068003  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:21.068084  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-301942 minikube.k8s.io/updated_at=2024_04_29T13_51_21_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=no-preload-301942 minikube.k8s.io/primary=true
	I0429 13:51:21.274389  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:21.274497  919134 ops.go:34] apiserver oom_adj: -16
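(Editor's note) The oom_adj value above comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` check a few lines earlier; -16 means the kernel strongly prefers not to OOM-kill the apiserver. A small Go sketch of the same check, assuming pgrep is available on the node; apiServerOOMAdj is an illustrative name, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiServerOOMAdj finds the kube-apiserver PID with pgrep and reads its
// oom_adj value from /proc, mirroring the log's shell one-liner.
func apiServerOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	adj, err := apiServerOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", adj)
}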
	I0429 13:51:21.774536  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:22.275433  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:22.774528  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:23.275181  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:23.775112  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:24.274762  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.274454  905474 kubeadm.go:309] [api-check] The API server is healthy after 6.002196479s
	I0429 13:51:26.296506  905474 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:26.319861  905474 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:26.366225  905474 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:26.366494  905474 kubeadm.go:309] [mark-control-plane] Marking the node pause-553639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:26.382944  905474 kubeadm.go:309] [bootstrap-token] Using token: c3k1q5.es7clq6k67f9ra0a
	I0429 13:51:26.384906  905474 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:26.385060  905474 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:26.392403  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:26.403806  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:26.409563  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:26.416188  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:26.425965  905474 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:26.688482  905474 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:27.197073  905474 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:27.688474  905474 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:27.689675  905474 kubeadm.go:309] 
	I0429 13:51:27.689785  905474 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:27.689798  905474 kubeadm.go:309] 
	I0429 13:51:27.689893  905474 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:27.689905  905474 kubeadm.go:309] 
	I0429 13:51:27.689936  905474 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:27.690015  905474 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:27.690137  905474 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:27.690176  905474 kubeadm.go:309] 
	I0429 13:51:27.690282  905474 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:27.690457  905474 kubeadm.go:309] 
	I0429 13:51:27.690633  905474 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:27.690666  905474 kubeadm.go:309] 
	I0429 13:51:27.690738  905474 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:27.690856  905474 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:27.690945  905474 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:27.690961  905474 kubeadm.go:309] 
	I0429 13:51:27.691082  905474 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:27.691207  905474 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:27.691232  905474 kubeadm.go:309] 
	I0429 13:51:27.691345  905474 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token c3k1q5.es7clq6k67f9ra0a \
	I0429 13:51:27.691538  905474 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:27.691587  905474 kubeadm.go:309] 	--control-plane 
	I0429 13:51:27.691614  905474 kubeadm.go:309] 
	I0429 13:51:27.691803  905474 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:27.691824  905474 kubeadm.go:309] 
	I0429 13:51:27.691938  905474 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token c3k1q5.es7clq6k67f9ra0a \
	I0429 13:51:27.692083  905474 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:27.692479  905474 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:51:27.692512  905474 cni.go:84] Creating CNI manager for ""
	I0429 13:51:27.692542  905474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:27.694910  905474 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:27.696556  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:27.715264  905474 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 13:51:27.743581  905474 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:27.743699  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.743712  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-553639 minikube.k8s.io/updated_at=2024_04_29T13_51_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=pause-553639 minikube.k8s.io/primary=true
	I0429 13:51:27.774964  905474 ops.go:34] apiserver oom_adj: -16
	I0429 13:51:27.947562  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:24.775450  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:25.275416  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:25.775133  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.274519  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.774594  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.275288  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.774962  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.274876  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.774516  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.275303  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.501525  919444 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:31.501636  919444 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:31.501753  919444 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:31.501898  919444 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:31.502025  919444 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:31.502127  919444 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:31.504128  919444 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:31.504246  919444 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:31.504334  919444 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:31.504441  919444 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 13:51:31.504524  919444 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 13:51:31.504607  919444 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 13:51:31.504684  919444 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 13:51:31.504759  919444 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 13:51:31.504948  919444 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-954581 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0429 13:51:31.505032  919444 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 13:51:31.505217  919444 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-954581 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0429 13:51:31.505318  919444 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 13:51:31.505379  919444 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 13:51:31.505443  919444 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 13:51:31.505551  919444 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:31.505640  919444 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:31.505717  919444 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:31.505795  919444 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:31.505885  919444 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:31.505963  919444 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:31.506077  919444 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:31.506195  919444 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:31.507943  919444 out.go:204]   - Booting up control plane ...
	I0429 13:51:31.508080  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:31.508189  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:31.508267  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:31.508376  919444 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:31.508521  919444 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:31.508601  919444 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:31.508780  919444 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:31.508889  919444 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:31.508982  919444 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002067963s
	I0429 13:51:31.509109  919444 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:31.509210  919444 kubeadm.go:309] [api-check] The API server is healthy after 6.003008264s
	I0429 13:51:31.509373  919444 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:31.509549  919444 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:31.509618  919444 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:31.509822  919444 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-954581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:31.509917  919444 kubeadm.go:309] [bootstrap-token] Using token: lxayf9.a1fyv4yzj0t2zn7h
	I0429 13:51:31.511632  919444 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:31.511761  919444 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:31.511885  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:31.512092  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:31.512289  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:31.512428  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:31.512558  919444 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:31.512749  919444 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:31.512828  919444 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:31.512906  919444 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:31.512917  919444 kubeadm.go:309] 
	I0429 13:51:31.513004  919444 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:31.513014  919444 kubeadm.go:309] 
	I0429 13:51:31.513137  919444 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:31.513147  919444 kubeadm.go:309] 
	I0429 13:51:31.513177  919444 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:31.513262  919444 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:31.513307  919444 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:31.513317  919444 kubeadm.go:309] 
	I0429 13:51:31.513366  919444 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:31.513374  919444 kubeadm.go:309] 
	I0429 13:51:31.513440  919444 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:31.513449  919444 kubeadm.go:309] 
	I0429 13:51:31.513512  919444 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:31.513609  919444 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:31.513696  919444 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:31.513710  919444 kubeadm.go:309] 
	I0429 13:51:31.513816  919444 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:31.513948  919444 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:31.513963  919444 kubeadm.go:309] 
	I0429 13:51:31.514095  919444 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token lxayf9.a1fyv4yzj0t2zn7h \
	I0429 13:51:31.514189  919444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:31.514210  919444 kubeadm.go:309] 	--control-plane 
	I0429 13:51:31.514217  919444 kubeadm.go:309] 
	I0429 13:51:31.514300  919444 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:31.514310  919444 kubeadm.go:309] 
	I0429 13:51:31.514399  919444 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token lxayf9.a1fyv4yzj0t2zn7h \
	I0429 13:51:31.514557  919444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:31.514571  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:51:31.514579  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:31.516453  919444 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:31.518126  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:31.532420  919444 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 13:51:31.558750  919444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:31.558835  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.558837  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-954581 minikube.k8s.io/updated_at=2024_04_29T13_51_31_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=embed-certs-954581 minikube.k8s.io/primary=true
	I0429 13:51:31.609722  919444 ops.go:34] apiserver oom_adj: -16
	I0429 13:51:31.809368  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.310347  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.448054  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.947685  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.448628  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.947715  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.447986  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.947683  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.448709  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.947778  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.448685  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.948669  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.774752  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.275256  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.775382  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.274923  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.774577  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.274472  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.775106  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.275187  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.775410  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.274789  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.423003  919134 kubeadm.go:1107] duration metric: took 13.355093749s to wait for elevateKubeSystemPrivileges
	W0429 13:51:34.423061  919134 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:34.423078  919134 kubeadm.go:393] duration metric: took 26.306862708s to StartCluster
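(Editor's note) The repeated `kubectl get sa default` runs above are a readiness poll: minikube retries roughly every 500ms until the default service account exists before it grants kube-system privileges, which is the ~13s "elevateKubeSystemPrivileges" duration reported here. A hedged Go sketch of such a poll loop, with an illustrative timeout; waitForDefaultSA is not minikube's function.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account is available
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}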
	I0429 13:51:34.423104  919134 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:34.423212  919134 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:34.424535  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:34.424824  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 13:51:34.424833  919134 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:34.426632  919134 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:34.424915  919134 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:34.425071  919134 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:34.428518  919134 addons.go:69] Setting storage-provisioner=true in profile "no-preload-301942"
	I0429 13:51:34.428576  919134 addons.go:234] Setting addon storage-provisioner=true in "no-preload-301942"
	I0429 13:51:34.428526  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:34.428628  919134 host.go:66] Checking if "no-preload-301942" exists ...
	I0429 13:51:34.428531  919134 addons.go:69] Setting default-storageclass=true in profile "no-preload-301942"
	I0429 13:51:34.428737  919134 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-301942"
	I0429 13:51:34.429077  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.429108  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.429149  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.429187  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.446327  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0429 13:51:34.446889  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.447594  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.447618  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.447974  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.448234  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.448341  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0429 13:51:34.448951  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.449597  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.449621  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.450118  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.450662  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.450688  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.452793  919134 addons.go:234] Setting addon default-storageclass=true in "no-preload-301942"
	I0429 13:51:34.452849  919134 host.go:66] Checking if "no-preload-301942" exists ...
	I0429 13:51:34.453255  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.453296  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.469135  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 13:51:34.469513  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0429 13:51:34.469695  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.469929  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.470257  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.470279  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.470666  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.470810  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.470836  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.470869  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.471242  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.471884  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.471939  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.473170  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:51:34.475579  919134 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:51:34.477063  919134 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:34.477085  919134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 13:51:34.477110  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:51:34.480514  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.480982  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:51:34.481008  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.481193  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:51:34.481437  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:51:34.481615  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:51:34.481768  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
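(Editor's note) The sshutil line above records everything needed to open the node connection: IP, port 22, the per-machine id_rsa key, and user docker. Below is a minimal sketch using golang.org/x/crypto/ssh showing how such a client could be built; host-key verification is skipped only for brevity, and this is not minikube's actual sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials a node with a private-key credential, using the same
// inputs the sshutil log line reports (address, user, key path).
func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: no host-key pinning
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := newSSHClient("192.168.72.248:22", "docker",
		"/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()
	fmt.Println("connected")
}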
	I0429 13:51:34.490950  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0429 13:51:34.491462  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.492150  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.492170  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.492515  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.492689  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.494506  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:51:34.494775  919134 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:34.494797  919134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 13:51:34.494813  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:51:34.498506  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.499044  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:51:34.499079  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.499516  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:51:34.499719  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:51:34.499873  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:51:34.500018  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:51:34.788612  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 13:51:34.788641  919134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:35.097001  919134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:35.190365  919134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:35.336195  919134 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0429 13:51:35.337235  919134 node_ready.go:35] waiting up to 6m0s for node "no-preload-301942" to be "Ready" ...
	I0429 13:51:35.353623  919134 node_ready.go:49] node "no-preload-301942" has status "Ready":"True"
	I0429 13:51:35.353653  919134 node_ready.go:38] duration metric: took 16.389597ms for node "no-preload-301942" to be "Ready" ...
	I0429 13:51:35.353663  919134 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:35.378776  919134 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:35.853014  919134 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-301942" context rescaled to 1 replicas
	I0429 13:51:36.367350  919134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.270294891s)
	I0429 13:51:36.367394  919134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17695906s)
	I0429 13:51:36.367482  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.367497  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.367512  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.367533  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.367940  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368004  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.367936  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368036  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.368051  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.368063  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.368008  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.367982  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.368023  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.368150  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.368372  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368389  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.368462  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.368531  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368567  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.437325  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.437357  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.437753  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.437804  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.437825  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.439340  919134 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 13:51:32.809617  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.310412  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.809800  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.310197  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.809508  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.309899  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.810305  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.309614  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.809530  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.309911  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.448668  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.947980  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.448640  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.948660  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.448573  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.948403  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.448682  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.948302  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.448641  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.948648  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.441067  919134 addons.go:505] duration metric: took 2.016146811s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 13:51:37.387507  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:38.448662  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.947690  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.448636  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.947580  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.448398  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.948349  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.448191  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.547321  905474 kubeadm.go:1107] duration metric: took 13.803696575s to wait for elevateKubeSystemPrivileges
	W0429 13:51:41.547394  905474 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:41.547408  905474 kubeadm.go:393] duration metric: took 5m10.871748964s to StartCluster
	I0429 13:51:41.547435  905474 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:41.547564  905474 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:41.548842  905474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:41.549141  905474 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:41.551193  905474 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:41.549260  905474 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:41.549431  905474 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:41.552835  905474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:41.554194  905474 out.go:177] * Enabled addons: 
	I0429 13:51:37.810170  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.310105  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.809531  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.310243  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.810449  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.310068  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.809489  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.310362  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.810342  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:42.310093  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.555487  905474 addons.go:505] duration metric: took 6.225649ms for enable addons: enabled=[]
	I0429 13:51:41.722347  905474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:41.743056  905474 node_ready.go:35] waiting up to 6m0s for node "pause-553639" to be "Ready" ...
	I0429 13:51:41.753188  905474 node_ready.go:49] node "pause-553639" has status "Ready":"True"
	I0429 13:51:41.753219  905474 node_ready.go:38] duration metric: took 10.125261ms for node "pause-553639" to be "Ready" ...
	I0429 13:51:41.753232  905474 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:41.760239  905474 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:39.392479  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:41.886396  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:43.888050  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:42.810447  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:43.310397  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:43.809871  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:44.309906  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:44.810118  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:45.310204  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:45.434296  919444 kubeadm.go:1107] duration metric: took 13.875520942s to wait for elevateKubeSystemPrivileges
	W0429 13:51:45.434360  919444 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:45.434375  919444 kubeadm.go:393] duration metric: took 26.706283529s to StartCluster
	I0429 13:51:45.434402  919444 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:45.434523  919444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:45.436948  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:45.437337  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 13:51:45.437359  919444 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:45.441140  919444 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:45.437430  919444 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:45.441228  919444 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-954581"
	I0429 13:51:45.437607  919444 config.go:182] Loaded profile config "embed-certs-954581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:45.441292  919444 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-954581"
	I0429 13:51:45.443096  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:45.441291  919444 addons.go:69] Setting default-storageclass=true in profile "embed-certs-954581"
	I0429 13:51:45.443218  919444 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-954581"
	I0429 13:51:45.441344  919444 host.go:66] Checking if "embed-certs-954581" exists ...
	I0429 13:51:45.443715  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.443737  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.443764  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.443770  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.461955  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0429 13:51:45.462026  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0429 13:51:45.462546  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.462599  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.463314  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.463336  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.463472  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.463499  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.463744  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.463861  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.463941  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.464511  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.464574  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.467569  919444 addons.go:234] Setting addon default-storageclass=true in "embed-certs-954581"
	I0429 13:51:45.467620  919444 host.go:66] Checking if "embed-certs-954581" exists ...
	I0429 13:51:45.467920  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.467975  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.482575  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0429 13:51:45.483099  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.483895  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.483920  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.484329  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.484566  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.487017  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:45.489722  919444 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:51:45.488682  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0429 13:51:43.769055  905474 pod_ready.go:102] pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:44.769600  905474 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.769632  905474 pod_ready.go:81] duration metric: took 3.009350313s for pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.769651  905474 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.776879  905474 pod_ready.go:92] pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.776908  905474 pod_ready.go:81] duration metric: took 7.248056ms for pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.776922  905474 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.783573  905474 pod_ready.go:92] pod "etcd-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.783608  905474 pod_ready.go:81] duration metric: took 6.675529ms for pod "etcd-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.783624  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.790571  905474 pod_ready.go:92] pod "kube-apiserver-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.790600  905474 pod_ready.go:81] duration metric: took 6.968174ms for pod "kube-apiserver-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.790611  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.796134  905474 pod_ready.go:92] pod "kube-controller-manager-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.796166  905474 pod_ready.go:81] duration metric: took 5.547308ms for pod "kube-controller-manager-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.796178  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lchdx" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.165718  905474 pod_ready.go:92] pod "kube-proxy-lchdx" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:45.165775  905474 pod_ready.go:81] duration metric: took 369.588708ms for pod "kube-proxy-lchdx" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.165809  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.565232  905474 pod_ready.go:92] pod "kube-scheduler-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:45.565268  905474 pod_ready.go:81] duration metric: took 399.442597ms for pod "kube-scheduler-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.565280  905474 pod_ready.go:38] duration metric: took 3.812034288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:45.565301  905474 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:51:45.565375  905474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:51:45.583769  905474 api_server.go:72] duration metric: took 4.034582991s to wait for apiserver process to appear ...
	I0429 13:51:45.583811  905474 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:51:45.583842  905474 api_server.go:253] Checking apiserver healthz at https://192.168.61.170:8443/healthz ...
	I0429 13:51:45.591464  905474 api_server.go:279] https://192.168.61.170:8443/healthz returned 200:
	ok
	I0429 13:51:45.594229  905474 api_server.go:141] control plane version: v1.30.0
	I0429 13:51:45.594266  905474 api_server.go:131] duration metric: took 10.445929ms to wait for apiserver health ...
	I0429 13:51:45.594278  905474 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:51:45.769050  905474 system_pods.go:59] 7 kube-system pods found
	I0429 13:51:45.769102  905474 system_pods.go:61] "coredns-7db6d8ff4d-qbcb2" [ae828405-af7f-4d81-89db-04f5a8b615b8] Running
	I0429 13:51:45.769109  905474 system_pods.go:61] "coredns-7db6d8ff4d-xfhkh" [0b51d117-6754-4d5a-8191-6376818cd778] Running
	I0429 13:51:45.769114  905474 system_pods.go:61] "etcd-pause-553639" [f60f7ca2-3a92-4c8c-86c2-cc639343a932] Running
	I0429 13:51:45.769121  905474 system_pods.go:61] "kube-apiserver-pause-553639" [9e8996af-54c4-4db4-a620-40962d99808a] Running
	I0429 13:51:45.769127  905474 system_pods.go:61] "kube-controller-manager-pause-553639" [ba478e54-3f9c-425b-83cd-1ca2bddfe039] Running
	I0429 13:51:45.769136  905474 system_pods.go:61] "kube-proxy-lchdx" [b56b310e-2281-4ff2-a3c1-9c6d3e340464] Running
	I0429 13:51:45.769141  905474 system_pods.go:61] "kube-scheduler-pause-553639" [828d0a42-fc45-450d-8745-8750c45fac94] Running
	I0429 13:51:45.769150  905474 system_pods.go:74] duration metric: took 174.863248ms to wait for pod list to return data ...
	I0429 13:51:45.769161  905474 default_sa.go:34] waiting for default service account to be created ...
	I0429 13:51:45.965156  905474 default_sa.go:45] found service account: "default"
	I0429 13:51:45.965210  905474 default_sa.go:55] duration metric: took 196.039407ms for default service account to be created ...
	I0429 13:51:45.965226  905474 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 13:51:46.168792  905474 system_pods.go:86] 7 kube-system pods found
	I0429 13:51:46.168831  905474 system_pods.go:89] "coredns-7db6d8ff4d-qbcb2" [ae828405-af7f-4d81-89db-04f5a8b615b8] Running
	I0429 13:51:46.168839  905474 system_pods.go:89] "coredns-7db6d8ff4d-xfhkh" [0b51d117-6754-4d5a-8191-6376818cd778] Running
	I0429 13:51:46.168845  905474 system_pods.go:89] "etcd-pause-553639" [f60f7ca2-3a92-4c8c-86c2-cc639343a932] Running
	I0429 13:51:46.168851  905474 system_pods.go:89] "kube-apiserver-pause-553639" [9e8996af-54c4-4db4-a620-40962d99808a] Running
	I0429 13:51:46.168856  905474 system_pods.go:89] "kube-controller-manager-pause-553639" [ba478e54-3f9c-425b-83cd-1ca2bddfe039] Running
	I0429 13:51:46.168861  905474 system_pods.go:89] "kube-proxy-lchdx" [b56b310e-2281-4ff2-a3c1-9c6d3e340464] Running
	I0429 13:51:46.168881  905474 system_pods.go:89] "kube-scheduler-pause-553639" [828d0a42-fc45-450d-8745-8750c45fac94] Running
	I0429 13:51:46.168891  905474 system_pods.go:126] duration metric: took 203.656456ms to wait for k8s-apps to be running ...
	I0429 13:51:46.168906  905474 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 13:51:46.168962  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:46.185863  905474 system_svc.go:56] duration metric: took 16.943118ms WaitForService to wait for kubelet
	I0429 13:51:46.185907  905474 kubeadm.go:576] duration metric: took 4.636729164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:51:46.185935  905474 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:51:46.366600  905474 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:51:46.366629  905474 node_conditions.go:123] node cpu capacity is 2
	I0429 13:51:46.366641  905474 node_conditions.go:105] duration metric: took 180.700199ms to run NodePressure ...
	I0429 13:51:46.366653  905474 start.go:240] waiting for startup goroutines ...
	I0429 13:51:46.366660  905474 start.go:245] waiting for cluster config update ...
	I0429 13:51:46.366667  905474 start.go:254] writing updated cluster config ...
	I0429 13:51:46.367051  905474 ssh_runner.go:195] Run: rm -f paused
	I0429 13:51:46.438349  905474 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 13:51:46.440724  905474 out.go:177] * Done! kubectl is now configured to use "pause-553639" cluster and "default" namespace by default
	I0429 13:51:45.490382  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.491519  919444 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:45.491639  919444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 13:51:45.491775  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:45.492243  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.492268  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.492664  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.493587  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.493624  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.496063  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.496514  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:45.496538  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.496850  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:45.497963  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:45.498174  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:45.498342  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:45.510997  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0429 13:51:45.511511  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.512030  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.512047  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.512480  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.512679  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.514510  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:45.514841  919444 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:45.514856  919444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 13:51:45.514873  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:45.518364  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.518878  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:45.518912  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.519289  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:45.519542  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:45.519740  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:45.519895  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:45.736886  919444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:45.736940  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 13:51:45.814101  919444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:45.990224  919444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:46.406270  919444 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0429 13:51:46.406460  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.406488  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.407021  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.407043  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.407055  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.407064  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.407070  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.407401  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.407425  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.407450  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.407890  919444 node_ready.go:35] waiting up to 6m0s for node "embed-certs-954581" to be "Ready" ...
	I0429 13:51:46.436792  919444 node_ready.go:49] node "embed-certs-954581" has status "Ready":"True"
	I0429 13:51:46.436826  919444 node_ready.go:38] duration metric: took 28.899422ms for node "embed-certs-954581" to be "Ready" ...
	I0429 13:51:46.436838  919444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:46.436970  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.436995  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.437320  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.437339  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.437343  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.467842  919444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4vstk" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:46.798195  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.798232  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.798714  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.798767  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.798777  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.798786  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.798795  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.799134  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.799196  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.799203  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.801544  919444 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	
	
	==> CRI-O <==
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.362632434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398707362592779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e275b772-289e-426f-bbaa-8a9496abebf1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.364099689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a48f021-d09e-4ee5-a66d-da4fbe8fd440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.364222045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a48f021-d09e-4ee5-a66d-da4fbe8fd440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.364509741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a48f021-d09e-4ee5-a66d-da4fbe8fd440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.419251875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddfc624e-8b9c-4e14-bfa3-2ba03b2f29c4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.419401212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddfc624e-8b9c-4e14-bfa3-2ba03b2f29c4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.421413654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87cdc38e-7fbf-4879-8979-99e09f3d6bee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.422034740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398707421924487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87cdc38e-7fbf-4879-8979-99e09f3d6bee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.422752962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c245f3b3-a590-4ccd-9663-f36359874e24 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.422830083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c245f3b3-a590-4ccd-9663-f36359874e24 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.423179225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c245f3b3-a590-4ccd-9663-f36359874e24 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.472525846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca927da2-85c6-437c-a7d7-ef3b3284b9f4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.472674853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca927da2-85c6-437c-a7d7-ef3b3284b9f4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.474889296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae77d3f5-efd8-49a4-9926-55573411253e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.475674167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398707475632550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae77d3f5-efd8-49a4-9926-55573411253e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.476628211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=022104ff-6668-42db-bb1d-bc734f93e081 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.476733577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=022104ff-6668-42db-bb1d-bc734f93e081 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.477058916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=022104ff-6668-42db-bb1d-bc734f93e081 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.528136680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c9d6483-2f1c-4c2d-982b-4aebd811eefe name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.528304829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c9d6483-2f1c-4c2d-982b-4aebd811eefe name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.532039660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fab4d30-f070-4367-9d16-4d8c9824c91f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.532675777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398707532633725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fab4d30-f070-4367-9d16-4d8c9824c91f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.533690925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f577e10-3166-420d-8728-efc999e8cf70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.533796061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f577e10-3166-420d-8728-efc999e8cf70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:47 pause-553639 crio[3058]: time="2024-04-29 13:51:47.534373097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f577e10-3166-420d-8728-efc999e8cf70 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6ffe22d03ad3a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   0                   85323976f623d       coredns-7db6d8ff4d-xfhkh
	71ad796050725       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   0                   09bf5ff189180       coredns-7db6d8ff4d-qbcb2
	fdbe40c1373bf       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   4 seconds ago       Running             kube-proxy                0                   bc3189a6510d4       kube-proxy-lchdx
	483c0763a8a4c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Running             etcd                      4                   d20c8ff3d81d1       etcd-pause-553639
	3699f70dd6052       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   26 seconds ago      Running             kube-apiserver            4                   0928a1cb46ea1       kube-apiserver-pause-553639
	5bf11cf3703a3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   26 seconds ago      Running             kube-controller-manager   4                   035a0cfb63e65       kube-controller-manager-pause-553639
	a625f9cc9cb85       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   26 seconds ago      Running             kube-scheduler            4                   f385f6f19141b       kube-scheduler-pause-553639
	
	
	==> coredns [6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-553639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-553639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=pause-553639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T13_51_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-553639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:51:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.170
	  Hostname:    pause-553639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f6180b70f984fc4bc96ef6adfcc4408
	  System UUID:                8f6180b7-0f98-4fc4-bc96-ef6adfcc4408
	  Boot ID:                    2fcfde3b-ce03-4523-81c1-289c7d65deb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-qbcb2                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     6s
	  kube-system                 coredns-7db6d8ff4d-xfhkh                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     6s
	  kube-system                 etcd-pause-553639                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (5%!)(MISSING)       0 (0%!)(MISSING)         20s
	  kube-system                 kube-apiserver-pause-553639             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20s
	  kube-system                 kube-controller-manager-pause-553639    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20s
	  kube-system                 kube-proxy-lchdx                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6s
	  kube-system                 kube-scheduler-pause-553639             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             240Mi (12%!)(MISSING)  340Mi (17%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-553639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-553639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-553639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s                kubelet          Node pause-553639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s                kubelet          Node pause-553639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s                kubelet          Node pause-553639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-553639 event: Registered Node pause-553639 in Controller
	
	
	==> dmesg <==
	[Apr29 13:44] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.071297] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.281171] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.843466] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.233629] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.082079] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.533717] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.013143] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +8.000756] kauditd_printk_skb: 90 callbacks suppressed
	[ +20.447397] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.205994] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.324017] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.228518] systemd-fstab-generator[2885]: Ignoring "noauto" option for root device
	[  +0.424669] systemd-fstab-generator[2914]: Ignoring "noauto" option for root device
	[Apr29 13:46] systemd-fstab-generator[3170]: Ignoring "noauto" option for root device
	[  +0.106938] kauditd_printk_skb: 174 callbacks suppressed
	[  +5.924555] kauditd_printk_skb: 86 callbacks suppressed
	[  +2.786839] systemd-fstab-generator[3896]: Ignoring "noauto" option for root device
	[Apr29 13:50] kauditd_printk_skb: 45 callbacks suppressed
	[Apr29 13:51] systemd-fstab-generator[5629]: Ignoring "noauto" option for root device
	[  +1.585094] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.033710] systemd-fstab-generator[5962]: Ignoring "noauto" option for root device
	[  +0.097060] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.744731] systemd-fstab-generator[6170]: Ignoring "noauto" option for root device
	[  +0.098887] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493] <==
	{"level":"info","ts":"2024-04-29T13:51:21.261937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 switched to configuration voters=(4446367452146456582)"}
	{"level":"info","ts":"2024-04-29T13:51:21.270175Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"43b9305115fb250d","local-member-id":"3db4aba3ce0c5806","added-peer-id":"3db4aba3ce0c5806","added-peer-peer-urls":["https://192.168.61.170:2380"]}
	{"level":"info","ts":"2024-04-29T13:51:21.326433Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:51:21.326741Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3db4aba3ce0c5806","initial-advertise-peer-urls":["https://192.168.61.170:2380"],"listen-peer-urls":["https://192.168.61.170:2380"],"advertise-client-urls":["https://192.168.61.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:51:21.32679Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:51:21.327062Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.170:2380"}
	{"level":"info","ts":"2024-04-29T13:51:21.327098Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.170:2380"}
	{"level":"info","ts":"2024-04-29T13:51:21.509056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 received MsgPreVoteResp from 3db4aba3ce0c5806 at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.509182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 received MsgVoteResp from 3db4aba3ce0c5806 at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.50919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.509197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3db4aba3ce0c5806 elected leader 3db4aba3ce0c5806 at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.515141Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.517475Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3db4aba3ce0c5806","local-member-attributes":"{Name:pause-553639 ClientURLs:[https://192.168.61.170:2379]}","request-path":"/0/members/3db4aba3ce0c5806/attributes","cluster-id":"43b9305115fb250d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:51:21.517773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:51:21.522271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:51:21.533056Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:51:21.533182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:51:21.543752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T13:51:21.552491Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.170:2379"}
	{"level":"info","ts":"2024-04-29T13:51:21.544244Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"43b9305115fb250d","local-member-id":"3db4aba3ce0c5806","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.561281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.561428Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:51:48 up 8 min,  0 users,  load average: 0.92, 0.57, 0.31
	Linux pause-553639 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e] <==
	I0429 13:51:23.999377       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:51:23.999558       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 13:51:24.000035       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 13:51:24.037353       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0429 13:51:24.059892       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 13:51:24.064958       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:51:24.065043       1 policy_source.go:224] refreshing policies
	E0429 13:51:24.092899       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0429 13:51:24.106634       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 13:51:24.298823       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:51:24.908564       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 13:51:24.921329       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 13:51:24.921429       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 13:51:25.901598       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:51:25.979912       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 13:51:26.132027       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 13:51:26.142198       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.170]
	I0429 13:51:26.148791       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 13:51:26.157608       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 13:51:26.957125       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:51:27.118752       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:51:27.173028       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 13:51:27.208396       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:51:40.906632       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 13:51:41.152248       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a] <==
	I0429 13:51:40.242621       1 shared_informer.go:320] Caches are synced for PV protection
	I0429 13:51:40.246427       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 13:51:40.247267       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0429 13:51:40.249329       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 13:51:40.249412       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0429 13:51:40.249439       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0429 13:51:40.249474       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 13:51:40.297660       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 13:51:40.306318       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:51:40.312232       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:51:40.347449       1 shared_informer.go:320] Caches are synced for disruption
	I0429 13:51:40.370483       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 13:51:40.404273       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 13:51:40.832713       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:51:40.833208       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 13:51:40.877292       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:51:41.358129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="441.597162ms"
	I0429 13:51:41.376956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.63227ms"
	I0429 13:51:41.403704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.601746ms"
	I0429 13:51:41.403926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.083µs"
	I0429 13:51:44.332268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="381.254µs"
	I0429 13:51:44.436571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.518258ms"
	I0429 13:51:44.436863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130.409µs"
	I0429 13:51:44.468473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.68009ms"
	I0429 13:51:44.469062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="171.823µs"
	
	
	==> kube-proxy [fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c] <==
	I0429 13:51:43.914784       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:51:43.929056       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.170"]
	I0429 13:51:43.985502       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:51:43.985567       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:51:43.985587       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:51:43.990945       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:51:43.991847       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:51:43.991887       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:51:43.993819       1 config.go:192] "Starting service config controller"
	I0429 13:51:43.993866       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:51:43.993907       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:51:43.996407       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:51:43.998835       1 config.go:319] "Starting node config controller"
	I0429 13:51:43.998908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:51:44.094628       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:51:44.097041       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:51:44.099434       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8] <==
	W0429 13:51:24.891932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 13:51:24.892082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 13:51:24.952649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 13:51:24.952769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 13:51:24.980175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 13:51:24.980319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 13:51:25.048071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 13:51:25.048202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 13:51:25.282635       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 13:51:25.282885       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:51:25.305089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 13:51:25.305223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 13:51:25.381775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 13:51:25.381835       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 13:51:25.386168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 13:51:25.386232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 13:51:25.444504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 13:51:25.444567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 13:51:25.469235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 13:51:25.469304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 13:51:25.469260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 13:51:25.469490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 13:51:25.493557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 13:51:25.493699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0429 13:51:27.489174       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:51:41 pause-553639 kubelet[5969]: E0429 13:51:41.191497    5969 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-553639" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-553639' and this object
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277756    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-proxy\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277815    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp64h\" (UniqueName: \"kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277833    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b56b310e-2281-4ff2-a3c1-9c6d3e340464-xtables-lock\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277850    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b56b310e-2281-4ff2-a3c1-9c6d3e340464-lib-modules\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.328625    5969 topology_manager.go:215] "Topology Admit Handler" podUID="0b51d117-6754-4d5a-8191-6376818cd778" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.353685    5969 topology_manager.go:215] "Topology Admit Handler" podUID="ae828405-af7f-4d81-89db-04f5a8b615b8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378511    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b51d117-6754-4d5a-8191-6376818cd778-config-volume\") pod \"coredns-7db6d8ff4d-xfhkh\" (UID: \"0b51d117-6754-4d5a-8191-6376818cd778\") " pod="kube-system/coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378605    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae828405-af7f-4d81-89db-04f5a8b615b8-config-volume\") pod \"coredns-7db6d8ff4d-qbcb2\" (UID: \"ae828405-af7f-4d81-89db-04f5a8b615b8\") " pod="kube-system/coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378653    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqnx\" (UniqueName: \"kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx\") pod \"coredns-7db6d8ff4d-xfhkh\" (UID: \"0b51d117-6754-4d5a-8191-6376818cd778\") " pod="kube-system/coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378704    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2sxb\" (UniqueName: \"kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb\") pod \"coredns-7db6d8ff4d-qbcb2\" (UID: \"ae828405-af7f-4d81-89db-04f5a8b615b8\") " pod="kube-system/coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396253    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396333    5969 projected.go:200] Error preparing data for projected volume kube-api-access-vp64h for pod kube-system/kube-proxy-lchdx: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396442    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h podName:b56b310e-2281-4ff2-a3c1-9c6d3e340464 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.896402969 +0000 UTC m=+15.993990662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vp64h" (UniqueName: "kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h") pod "kube-proxy-lchdx" (UID: "b56b310e-2281-4ff2-a3c1-9c6d3e340464") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499373    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499441    5969 projected.go:200] Error preparing data for projected volume kube-api-access-crqnx for pod kube-system/coredns-7db6d8ff4d-xfhkh: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499536    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx podName:0b51d117-6754-4d5a-8191-6376818cd778 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.999508328 +0000 UTC m=+16.097096022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-crqnx" (UniqueName: "kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx") pod "coredns-7db6d8ff4d-xfhkh" (UID: "0b51d117-6754-4d5a-8191-6376818cd778") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499661    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499693    5969 projected.go:200] Error preparing data for projected volume kube-api-access-h2sxb for pod kube-system/coredns-7db6d8ff4d-qbcb2: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499721    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb podName:ae828405-af7f-4d81-89db-04f5a8b615b8 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.999710992 +0000 UTC m=+16.097298697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2sxb" (UniqueName: "kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb") pod "coredns-7db6d8ff4d-qbcb2" (UID: "ae828405-af7f-4d81-89db-04f5a8b615b8") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.354328    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qbcb2" podStartSLOduration=3.354283051 podStartE2EDuration="3.354283051s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.330090197 +0000 UTC m=+17.427677907" watchObservedRunningTime="2024-04-29 13:51:44.354283051 +0000 UTC m=+17.451870756"
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.355654    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lchdx" podStartSLOduration=3.355634475 podStartE2EDuration="3.355634475s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.355012533 +0000 UTC m=+17.452600243" watchObservedRunningTime="2024-04-29 13:51:44.355634475 +0000 UTC m=+17.453222179"
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.449381    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xfhkh" podStartSLOduration=3.4492777009999998 podStartE2EDuration="3.449277701s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.407183669 +0000 UTC m=+17.504771381" watchObservedRunningTime="2024-04-29 13:51:44.449277701 +0000 UTC m=+17.546865414"
	Apr 29 13:51:47 pause-553639 kubelet[5969]: I0429 13:51:47.593383    5969 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 13:51:47 pause-553639 kubelet[5969]: I0429 13:51:47.594835    5969 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-553639 -n pause-553639
helpers_test.go:261: (dbg) Run:  kubectl --context pause-553639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-553639 -n pause-553639
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-553639 logs -n 25
E0429 13:51:49.382829  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/kindnet-807154/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-553639 logs -n 25: (1.380485076s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-807154 sudo cat                           | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | systemctl status cri-docker                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo cat                           | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat cri-docker                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo                               | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | cri-dockerd --version                                |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo find                          | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | systemctl status containerd                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p flannel-807154 sudo crio                          | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat containerd                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| delete  | -p flannel-807154                                    | flannel-807154     | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo cat                            | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| start   | -p no-preload-301942                                 | no-preload-301942  | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo                                | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo find                           | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-807154 sudo crio                           | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| delete  | -p bridge-807154                                     | bridge-807154      | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC | 29 Apr 24 13:50 UTC |
	| start   | -p embed-certs-954581                                | embed-certs-954581 | jenkins | v1.33.0 | 29 Apr 24 13:50 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                    |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:50:17
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:50:17.389789  919444 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:50:17.390118  919444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:50:17.390130  919444 out.go:304] Setting ErrFile to fd 2...
	I0429 13:50:17.390134  919444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:50:17.390322  919444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:50:17.390998  919444 out.go:298] Setting JSON to false
	I0429 13:50:17.392501  919444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":81162,"bootTime":1714317455,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:50:17.392587  919444 start.go:139] virtualization: kvm guest
	I0429 13:50:17.395252  919444 out.go:177] * [embed-certs-954581] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:50:17.397116  919444 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 13:50:17.397167  919444 notify.go:220] Checking for updates...
	I0429 13:50:17.398441  919444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:50:17.399943  919444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:50:17.401511  919444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:17.402901  919444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:50:17.404310  919444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:50:17.406105  919444 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:17.406221  919444 config.go:182] Loaded profile config "old-k8s-version-856849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 13:50:17.406347  919444 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:17.406497  919444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:50:17.451856  919444 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 13:50:17.453497  919444 start.go:297] selected driver: kvm2
	I0429 13:50:17.453523  919444 start.go:901] validating driver "kvm2" against <nil>
	I0429 13:50:17.453543  919444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:50:17.454659  919444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:17.454786  919444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 13:50:17.474193  919444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 13:50:17.474275  919444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 13:50:17.474511  919444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:50:17.474586  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:50:17.474603  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:50:17.474614  919444 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 13:50:17.474747  919444 start.go:340] cluster config:
	{Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:50:17.474937  919444 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:17.477339  919444 out.go:177] * Starting "embed-certs-954581" primary control-plane node in "embed-certs-954581" cluster
	I0429 13:50:14.094483  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:16.097414  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:14.395479  919134 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 13:50:14.392865  919134 cache.go:107] acquiring lock: {Name:mk9033ac27572f6bdd2f91b1761afa042faa357b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:14.393309  919134 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 13:50:14.395659  919134 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:14.395664  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:50:14.392805  919134 cache.go:107] acquiring lock: {Name:mk227d98603ff8c1cd8ccb99c0467815135844cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:50:14.395707  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:50:14.393010  919134 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:14.394826  919134 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:14.394868  919134 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:14.394975  919134 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:14.395764  919134 cache.go:115] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0429 13:50:14.395997  919134 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.199971ms
	I0429 13:50:14.396020  919134 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0429 13:50:14.394830  919134 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:14.396807  919134 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:14.397009  919134 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:14.397086  919134 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 13:50:14.415706  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0429 13:50:14.416313  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:50:14.416887  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:50:14.416918  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:50:14.417306  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:50:14.417540  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:14.417705  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:14.417887  919134 start.go:159] libmachine.API.Create for "no-preload-301942" (driver="kvm2")
	I0429 13:50:14.417918  919134 client.go:168] LocalClient.Create starting
	I0429 13:50:14.417957  919134 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 13:50:14.418002  919134 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:14.418018  919134 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:14.418075  919134 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 13:50:14.418093  919134 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:14.418104  919134 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:14.418125  919134 main.go:141] libmachine: Running pre-create checks...
	I0429 13:50:14.418135  919134 main.go:141] libmachine: (no-preload-301942) Calling .PreCreateCheck
	I0429 13:50:14.418504  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:14.419085  919134 main.go:141] libmachine: Creating machine...
	I0429 13:50:14.419101  919134 main.go:141] libmachine: (no-preload-301942) Calling .Create
	I0429 13:50:14.419211  919134 main.go:141] libmachine: (no-preload-301942) Creating KVM machine...
	I0429 13:50:14.420712  919134 main.go:141] libmachine: (no-preload-301942) DBG | found existing default KVM network
	I0429 13:50:14.422099  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.421928  919169 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c3:05:e2} reservation:<nil>}
	I0429 13:50:14.423249  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.423129  919169 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:dc:77} reservation:<nil>}
	I0429 13:50:14.424169  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.424100  919169 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cf:a1:bd} reservation:<nil>}
	I0429 13:50:14.425375  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.425301  919169 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000309200}
	I0429 13:50:14.425431  919134 main.go:141] libmachine: (no-preload-301942) DBG | created network xml: 
	I0429 13:50:14.425470  919134 main.go:141] libmachine: (no-preload-301942) DBG | <network>
	I0429 13:50:14.425485  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <name>mk-no-preload-301942</name>
	I0429 13:50:14.425493  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <dns enable='no'/>
	I0429 13:50:14.425505  919134 main.go:141] libmachine: (no-preload-301942) DBG |   
	I0429 13:50:14.425517  919134 main.go:141] libmachine: (no-preload-301942) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 13:50:14.425530  919134 main.go:141] libmachine: (no-preload-301942) DBG |     <dhcp>
	I0429 13:50:14.425539  919134 main.go:141] libmachine: (no-preload-301942) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 13:50:14.425553  919134 main.go:141] libmachine: (no-preload-301942) DBG |     </dhcp>
	I0429 13:50:14.425563  919134 main.go:141] libmachine: (no-preload-301942) DBG |   </ip>
	I0429 13:50:14.425571  919134 main.go:141] libmachine: (no-preload-301942) DBG |   
	I0429 13:50:14.425589  919134 main.go:141] libmachine: (no-preload-301942) DBG | </network>
	I0429 13:50:14.425632  919134 main.go:141] libmachine: (no-preload-301942) DBG | 
	I0429 13:50:14.432239  919134 main.go:141] libmachine: (no-preload-301942) DBG | trying to create private KVM network mk-no-preload-301942 192.168.72.0/24...
	I0429 13:50:14.541454  919134 main.go:141] libmachine: (no-preload-301942) DBG | private KVM network mk-no-preload-301942 192.168.72.0/24 created
	I0429 13:50:14.541495  919134 main.go:141] libmachine: (no-preload-301942) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 ...
	I0429 13:50:14.541521  919134 main.go:141] libmachine: (no-preload-301942) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 13:50:14.541582  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.541455  919169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:14.541635  919134 main.go:141] libmachine: (no-preload-301942) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 13:50:14.567653  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0429 13:50:14.567670  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 13:50:14.569332  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 13:50:14.593701  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 13:50:14.600179  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 13:50:14.603724  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 13:50:14.631182  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0429 13:50:14.631214  919134 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 238.360676ms
	I0429 13:50:14.631229  919134 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0429 13:50:14.661666  919134 cache.go:162] opening:  /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 13:50:14.846909  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:14.846745  919169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa...
	I0429 13:50:15.033421  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0429 13:50:15.033456  919134 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0" took 640.514669ms
	I0429 13:50:15.033472  919134 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0429 13:50:15.095063  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:15.094934  919169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/no-preload-301942.rawdisk...
	I0429 13:50:15.095095  919134 main.go:141] libmachine: (no-preload-301942) DBG | Writing magic tar header
	I0429 13:50:15.095131  919134 main.go:141] libmachine: (no-preload-301942) DBG | Writing SSH key tar header
	I0429 13:50:15.095197  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:15.095148  919169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 ...
	I0429 13:50:15.095395  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942
	I0429 13:50:15.095428  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 13:50:15.095442  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942 (perms=drwx------)
	I0429 13:50:15.095464  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 13:50:15.095479  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 13:50:15.095495  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 13:50:15.095510  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:15.095528  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 13:50:15.095539  919134 main.go:141] libmachine: (no-preload-301942) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 13:50:15.095549  919134 main.go:141] libmachine: (no-preload-301942) Creating domain...
	I0429 13:50:15.095563  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 13:50:15.095576  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 13:50:15.095592  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home/jenkins
	I0429 13:50:15.095606  919134 main.go:141] libmachine: (no-preload-301942) DBG | Checking permissions on dir: /home
	I0429 13:50:15.095623  919134 main.go:141] libmachine: (no-preload-301942) DBG | Skipping /home - not owner
	I0429 13:50:15.097041  919134 main.go:141] libmachine: (no-preload-301942) define libvirt domain using xml: 
	I0429 13:50:15.097467  919134 main.go:141] libmachine: (no-preload-301942) <domain type='kvm'>
	I0429 13:50:15.097496  919134 main.go:141] libmachine: (no-preload-301942)   <name>no-preload-301942</name>
	I0429 13:50:15.097508  919134 main.go:141] libmachine: (no-preload-301942)   <memory unit='MiB'>2200</memory>
	I0429 13:50:15.097520  919134 main.go:141] libmachine: (no-preload-301942)   <vcpu>2</vcpu>
	I0429 13:50:15.097529  919134 main.go:141] libmachine: (no-preload-301942)   <features>
	I0429 13:50:15.097547  919134 main.go:141] libmachine: (no-preload-301942)     <acpi/>
	I0429 13:50:15.097555  919134 main.go:141] libmachine: (no-preload-301942)     <apic/>
	I0429 13:50:15.097563  919134 main.go:141] libmachine: (no-preload-301942)     <pae/>
	I0429 13:50:15.097593  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.097619  919134 main.go:141] libmachine: (no-preload-301942)   </features>
	I0429 13:50:15.097630  919134 main.go:141] libmachine: (no-preload-301942)   <cpu mode='host-passthrough'>
	I0429 13:50:15.097637  919134 main.go:141] libmachine: (no-preload-301942)   
	I0429 13:50:15.097647  919134 main.go:141] libmachine: (no-preload-301942)   </cpu>
	I0429 13:50:15.097653  919134 main.go:141] libmachine: (no-preload-301942)   <os>
	I0429 13:50:15.097667  919134 main.go:141] libmachine: (no-preload-301942)     <type>hvm</type>
	I0429 13:50:15.097674  919134 main.go:141] libmachine: (no-preload-301942)     <boot dev='cdrom'/>
	I0429 13:50:15.097686  919134 main.go:141] libmachine: (no-preload-301942)     <boot dev='hd'/>
	I0429 13:50:15.097697  919134 main.go:141] libmachine: (no-preload-301942)     <bootmenu enable='no'/>
	I0429 13:50:15.097705  919134 main.go:141] libmachine: (no-preload-301942)   </os>
	I0429 13:50:15.097719  919134 main.go:141] libmachine: (no-preload-301942)   <devices>
	I0429 13:50:15.097728  919134 main.go:141] libmachine: (no-preload-301942)     <disk type='file' device='cdrom'>
	I0429 13:50:15.097742  919134 main.go:141] libmachine: (no-preload-301942)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/boot2docker.iso'/>
	I0429 13:50:15.097787  919134 main.go:141] libmachine: (no-preload-301942)       <target dev='hdc' bus='scsi'/>
	I0429 13:50:15.097809  919134 main.go:141] libmachine: (no-preload-301942)       <readonly/>
	I0429 13:50:15.097824  919134 main.go:141] libmachine: (no-preload-301942)     </disk>
	I0429 13:50:15.097832  919134 main.go:141] libmachine: (no-preload-301942)     <disk type='file' device='disk'>
	I0429 13:50:15.097847  919134 main.go:141] libmachine: (no-preload-301942)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 13:50:15.097862  919134 main.go:141] libmachine: (no-preload-301942)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/no-preload-301942.rawdisk'/>
	I0429 13:50:15.097876  919134 main.go:141] libmachine: (no-preload-301942)       <target dev='hda' bus='virtio'/>
	I0429 13:50:15.097884  919134 main.go:141] libmachine: (no-preload-301942)     </disk>
	I0429 13:50:15.097895  919134 main.go:141] libmachine: (no-preload-301942)     <interface type='network'>
	I0429 13:50:15.097903  919134 main.go:141] libmachine: (no-preload-301942)       <source network='mk-no-preload-301942'/>
	I0429 13:50:15.097911  919134 main.go:141] libmachine: (no-preload-301942)       <model type='virtio'/>
	I0429 13:50:15.097921  919134 main.go:141] libmachine: (no-preload-301942)     </interface>
	I0429 13:50:15.097931  919134 main.go:141] libmachine: (no-preload-301942)     <interface type='network'>
	I0429 13:50:15.097941  919134 main.go:141] libmachine: (no-preload-301942)       <source network='default'/>
	I0429 13:50:15.097950  919134 main.go:141] libmachine: (no-preload-301942)       <model type='virtio'/>
	I0429 13:50:15.097965  919134 main.go:141] libmachine: (no-preload-301942)     </interface>
	I0429 13:50:15.097993  919134 main.go:141] libmachine: (no-preload-301942)     <serial type='pty'>
	I0429 13:50:15.098017  919134 main.go:141] libmachine: (no-preload-301942)       <target port='0'/>
	I0429 13:50:15.098042  919134 main.go:141] libmachine: (no-preload-301942)     </serial>
	I0429 13:50:15.098054  919134 main.go:141] libmachine: (no-preload-301942)     <console type='pty'>
	I0429 13:50:15.098063  919134 main.go:141] libmachine: (no-preload-301942)       <target type='serial' port='0'/>
	I0429 13:50:15.098073  919134 main.go:141] libmachine: (no-preload-301942)     </console>
	I0429 13:50:15.098081  919134 main.go:141] libmachine: (no-preload-301942)     <rng model='virtio'>
	I0429 13:50:15.098098  919134 main.go:141] libmachine: (no-preload-301942)       <backend model='random'>/dev/random</backend>
	I0429 13:50:15.098110  919134 main.go:141] libmachine: (no-preload-301942)     </rng>
	I0429 13:50:15.098117  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.098125  919134 main.go:141] libmachine: (no-preload-301942)     
	I0429 13:50:15.098147  919134 main.go:141] libmachine: (no-preload-301942)   </devices>
	I0429 13:50:15.098161  919134 main.go:141] libmachine: (no-preload-301942) </domain>
	I0429 13:50:15.098177  919134 main.go:141] libmachine: (no-preload-301942) 
	I0429 13:50:15.103207  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:f6:20:b0 in network default
	I0429 13:50:15.103953  919134 main.go:141] libmachine: (no-preload-301942) Ensuring networks are active...
	I0429 13:50:15.103986  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:15.105239  919134 main.go:141] libmachine: (no-preload-301942) Ensuring network default is active
	I0429 13:50:15.105654  919134 main.go:141] libmachine: (no-preload-301942) Ensuring network mk-no-preload-301942 is active
	I0429 13:50:15.106238  919134 main.go:141] libmachine: (no-preload-301942) Getting domain xml...
	I0429 13:50:15.107141  919134 main.go:141] libmachine: (no-preload-301942) Creating domain...
	I0429 13:50:15.816897  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0429 13:50:15.816939  919134 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.4240432s
	I0429 13:50:15.816960  919134 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0429 13:50:16.323723  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0429 13:50:16.323758  919134 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 1.930893959s
	I0429 13:50:16.323775  919134 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0429 13:50:16.328410  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0429 13:50:16.328442  919134 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0" took 1.935615361s
	I0429 13:50:16.328454  919134 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0429 13:50:16.337237  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0429 13:50:16.337272  919134 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0" took 1.944348532s
	I0429 13:50:16.337288  919134 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0429 13:50:16.350125  919134 cache.go:157] /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0429 13:50:16.350158  919134 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0" took 1.957379971s
	I0429 13:50:16.350170  919134 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0429 13:50:16.350190  919134 cache.go:87] Successfully saved all images to host disk.
	I0429 13:50:17.250234  919134 main.go:141] libmachine: (no-preload-301942) Waiting to get IP...
	I0429 13:50:17.251091  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.251602  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.251627  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.251585  919169 retry.go:31] will retry after 281.994457ms: waiting for machine to come up
	I0429 13:50:17.535540  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.536080  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.536107  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.536045  919169 retry.go:31] will retry after 322.982246ms: waiting for machine to come up
	I0429 13:50:17.860643  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:17.861388  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:17.861420  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:17.861335  919169 retry.go:31] will retry after 446.702671ms: waiting for machine to come up
	I0429 13:50:18.310456  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:18.311239  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:18.311273  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:18.311193  919169 retry.go:31] will retry after 497.51088ms: waiting for machine to come up
	I0429 13:50:18.809928  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:18.810462  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:18.810493  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:18.810410  919169 retry.go:31] will retry after 509.953214ms: waiting for machine to come up
	I0429 13:50:17.478893  919444 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:50:17.478975  919444 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 13:50:17.478994  919444 cache.go:56] Caching tarball of preloaded images
	I0429 13:50:17.479144  919444 preload.go:173] Found /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 13:50:17.479168  919444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 13:50:17.479334  919444 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json ...
	I0429 13:50:17.479399  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json: {Name:mkca47fb2fbc9a743c10ae5b852ab96d0c7c3058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:50:17.479660  919444 start.go:360] acquireMachinesLock for embed-certs-954581: {Name:mk2ade588bdb49db467a1dc12f643b97a2b1f5bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:50:18.100289  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:20.595440  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:19.322433  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:19.323177  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:19.323215  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:19.323077  919169 retry.go:31] will retry after 705.195479ms: waiting for machine to come up
	I0429 13:50:20.029430  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:20.029952  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:20.029983  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:20.029895  919169 retry.go:31] will retry after 1.070457514s: waiting for machine to come up
	I0429 13:50:21.102218  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:21.102792  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:21.102853  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:21.102755  919169 retry.go:31] will retry after 1.2238304s: waiting for machine to come up
	I0429 13:50:22.329052  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:22.329671  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:22.329697  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:22.329606  919169 retry.go:31] will retry after 1.385246734s: waiting for machine to come up
	I0429 13:50:23.716577  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:23.717345  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:23.717379  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:23.717287  919169 retry.go:31] will retry after 1.569748013s: waiting for machine to come up
	I0429 13:50:23.096395  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:25.594532  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:27.595225  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:25.288695  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:25.289272  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:25.289302  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:25.289224  919169 retry.go:31] will retry after 1.89390905s: waiting for machine to come up
	I0429 13:50:27.185386  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:27.185906  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:27.185945  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:27.185820  919169 retry.go:31] will retry after 3.391341067s: waiting for machine to come up
	I0429 13:50:30.095415  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:32.594923  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:30.578512  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:30.579236  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:30.579292  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:30.579152  919169 retry.go:31] will retry after 3.587589732s: waiting for machine to come up
	I0429 13:50:34.171105  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:34.171890  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find current IP address of domain no-preload-301942 in network mk-no-preload-301942
	I0429 13:50:34.171924  919134 main.go:141] libmachine: (no-preload-301942) DBG | I0429 13:50:34.171817  919169 retry.go:31] will retry after 5.321567172s: waiting for machine to come up
	I0429 13:50:34.595127  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:36.595603  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:39.664237  916079 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 13:50:39.664595  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:39.664825  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:50:39.093863  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:41.594132  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:44.087226  919444 start.go:364] duration metric: took 26.607508523s to acquireMachinesLock for "embed-certs-954581"
	I0429 13:50:44.087305  919444 start.go:93] Provisioning new machine with config: &{Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:50:44.087479  919444 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 13:50:39.495881  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.496984  919134 main.go:141] libmachine: (no-preload-301942) Found IP for machine: 192.168.72.248
	I0429 13:50:39.497036  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has current primary IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.497048  919134 main.go:141] libmachine: (no-preload-301942) Reserving static IP address...
	I0429 13:50:39.497537  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find host DHCP lease matching {name: "no-preload-301942", mac: "52:54:00:30:7e:ee", ip: "192.168.72.248"} in network mk-no-preload-301942
	I0429 13:50:39.604213  919134 main.go:141] libmachine: (no-preload-301942) DBG | Getting to WaitForSSH function...
	I0429 13:50:39.604251  919134 main.go:141] libmachine: (no-preload-301942) Reserved static IP address: 192.168.72.248
	I0429 13:50:39.604264  919134 main.go:141] libmachine: (no-preload-301942) Waiting for SSH to be available...
	I0429 13:50:39.607385  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:39.607937  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942
	I0429 13:50:39.607966  919134 main.go:141] libmachine: (no-preload-301942) DBG | unable to find defined IP address of network mk-no-preload-301942 interface with MAC address 52:54:00:30:7e:ee
	I0429 13:50:39.608095  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH client type: external
	I0429 13:50:39.608146  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa (-rw-------)
	I0429 13:50:39.608195  919134 main.go:141] libmachine: (no-preload-301942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:50:39.608219  919134 main.go:141] libmachine: (no-preload-301942) DBG | About to run SSH command:
	I0429 13:50:39.608238  919134 main.go:141] libmachine: (no-preload-301942) DBG | exit 0
	I0429 13:50:39.612398  919134 main.go:141] libmachine: (no-preload-301942) DBG | SSH cmd err, output: exit status 255: 
	I0429 13:50:39.612434  919134 main.go:141] libmachine: (no-preload-301942) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 13:50:39.612467  919134 main.go:141] libmachine: (no-preload-301942) DBG | command : exit 0
	I0429 13:50:39.612478  919134 main.go:141] libmachine: (no-preload-301942) DBG | err     : exit status 255
	I0429 13:50:39.612486  919134 main.go:141] libmachine: (no-preload-301942) DBG | output  : 
	I0429 13:50:42.613141  919134 main.go:141] libmachine: (no-preload-301942) DBG | Getting to WaitForSSH function...
	I0429 13:50:42.616037  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.616489  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.616521  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.616752  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH client type: external
	I0429 13:50:42.616800  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa (-rw-------)
	I0429 13:50:42.616838  919134 main.go:141] libmachine: (no-preload-301942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:50:42.616853  919134 main.go:141] libmachine: (no-preload-301942) DBG | About to run SSH command:
	I0429 13:50:42.616888  919134 main.go:141] libmachine: (no-preload-301942) DBG | exit 0
	I0429 13:50:42.740168  919134 main.go:141] libmachine: (no-preload-301942) DBG | SSH cmd err, output: <nil>: 
	I0429 13:50:42.740617  919134 main.go:141] libmachine: (no-preload-301942) KVM machine creation complete!
	I0429 13:50:42.740847  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:42.741473  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:42.741723  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:42.741935  919134 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 13:50:42.741952  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:50:42.743248  919134 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 13:50:42.743268  919134 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 13:50:42.743276  919134 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 13:50:42.743284  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.745486  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.745908  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.745963  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.746103  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.746325  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.746498  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.746646  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.746854  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.747114  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.747127  919134 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 13:50:42.851272  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:50:42.851306  919134 main.go:141] libmachine: Detecting the provisioner...
	I0429 13:50:42.851318  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.854501  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.854939  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.854967  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.855177  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.855428  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.855614  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.855788  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.855976  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.856225  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.856240  919134 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 13:50:42.960863  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 13:50:42.961027  919134 main.go:141] libmachine: found compatible host: buildroot
	I0429 13:50:42.961048  919134 main.go:141] libmachine: Provisioning with buildroot...
	I0429 13:50:42.961061  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:42.961409  919134 buildroot.go:166] provisioning hostname "no-preload-301942"
	I0429 13:50:42.961443  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:42.961668  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:42.965374  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.965807  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:42.965858  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:42.966159  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:42.966442  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.966670  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:42.966824  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:42.967046  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:42.967254  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:42.967268  919134 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-301942 && echo "no-preload-301942" | sudo tee /etc/hostname
	I0429 13:50:43.089711  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-301942
	
	I0429 13:50:43.089779  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.093425  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.093857  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.093912  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.094124  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.094376  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.094579  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.094715  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.094944  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.095144  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.095162  919134 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-301942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-301942/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-301942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:50:43.211925  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
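The hostname step above is just an idempotent edit of /etc/hosts: if no line already maps an address to "no-preload-301942", the existing 127.0.1.1 entry is rewritten, otherwise one is appended. A minimal Go sketch of the same logic (illustrative only, not minikube's actual code path; the path and hostname are taken from the log line):

	// ensure_hosts.go: hedged sketch of the /etc/hosts update shown above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// ensureHostsEntry mirrors the shell snippet in the log: if the hosts file has
	// no line ending in the hostname, rewrite the 127.0.1.1 entry or append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		hasHost := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
		if hasHost.Match(data) {
			return nil // already mapped, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		var out []byte
		if loopback.Match(data) {
			out = loopback.ReplaceAll(data, []byte(entry))
		} else {
			out = append(data, []byte(entry+"\n")...)
		}
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "no-preload-301942"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}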
	I0429 13:50:43.211960  919134 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:50:43.212002  919134 buildroot.go:174] setting up certificates
	I0429 13:50:43.212034  919134 provision.go:84] configureAuth start
	I0429 13:50:43.212046  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetMachineName
	I0429 13:50:43.212387  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:43.215462  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.215970  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.216001  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.216365  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.219318  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.219736  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.219759  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.219939  919134 provision.go:143] copyHostCerts
	I0429 13:50:43.220021  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:50:43.220036  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:50:43.220126  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:50:43.220231  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:50:43.220243  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:50:43.220285  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:50:43.220361  919134 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:50:43.220372  919134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:50:43.220408  919134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:50:43.220469  919134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.no-preload-301942 san=[127.0.0.1 192.168.72.248 localhost minikube no-preload-301942]
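For context, the server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.72.248, localhost, minikube, no-preload-301942) and is signed by the profile's CA. As a hedged, self-contained illustration of the same SAN set (self-signed rather than CA-signed; key size and validity are arbitrary, not minikube's values):

	// gen_cert.go: illustrative self-signed cert with the SANs from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-301942"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-301942"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.248")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}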
	I0429 13:50:43.366687  919134 provision.go:177] copyRemoteCerts
	I0429 13:50:43.366767  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:50:43.366795  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.370227  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.370514  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.370549  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.370809  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.371087  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.371290  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.371506  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:43.455647  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:50:43.485751  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 13:50:43.514982  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 13:50:43.547132  919134 provision.go:87] duration metric: took 335.077968ms to configureAuth
	I0429 13:50:43.547181  919134 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:50:43.547440  919134 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:50:43.547589  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.550839  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.551280  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.551312  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.551550  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.551807  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.552018  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.552204  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.552410  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.552642  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.552664  919134 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:50:43.833557  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:50:43.833590  919134 main.go:141] libmachine: Checking connection to Docker...
	I0429 13:50:43.833599  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetURL
	I0429 13:50:43.834855  919134 main.go:141] libmachine: (no-preload-301942) DBG | Using libvirt version 6000000
	I0429 13:50:43.837263  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.837642  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.837674  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.837907  919134 main.go:141] libmachine: Docker is up and running!
	I0429 13:50:43.837922  919134 main.go:141] libmachine: Reticulating splines...
	I0429 13:50:43.837930  919134 client.go:171] duration metric: took 29.420004419s to LocalClient.Create
	I0429 13:50:43.837958  919134 start.go:167] duration metric: took 29.420068339s to libmachine.API.Create "no-preload-301942"
	I0429 13:50:43.837979  919134 start.go:293] postStartSetup for "no-preload-301942" (driver="kvm2")
	I0429 13:50:43.837989  919134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:50:43.838008  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:43.838266  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:50:43.838292  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.840823  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.841166  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.841199  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.841350  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.841546  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.841745  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.841892  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:43.925132  919134 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:50:43.930411  919134 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:50:43.930445  919134 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:50:43.930527  919134 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:50:43.930623  919134 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:50:43.930723  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:50:43.942672  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:50:43.971801  919134 start.go:296] duration metric: took 133.803879ms for postStartSetup
	I0429 13:50:43.971890  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetConfigRaw
	I0429 13:50:43.972556  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:43.975803  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.976229  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.976260  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.976596  919134 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/config.json ...
	I0429 13:50:43.976868  919134 start.go:128] duration metric: took 29.583714024s to createHost
	I0429 13:50:43.976900  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:43.979259  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.979636  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:43.979676  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:43.979836  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:43.980088  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.980243  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:43.980361  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:43.980517  919134 main.go:141] libmachine: Using SSH client type: native
	I0429 13:50:43.980720  919134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.248 22 <nil> <nil>}
	I0429 13:50:43.980736  919134 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 13:50:44.087014  919134 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398644.076196611
	
	I0429 13:50:44.087051  919134 fix.go:216] guest clock: 1714398644.076196611
	I0429 13:50:44.087059  919134 fix.go:229] Guest: 2024-04-29 13:50:44.076196611 +0000 UTC Remote: 2024-04-29 13:50:43.976884358 +0000 UTC m=+29.734542335 (delta=99.312253ms)
	I0429 13:50:44.087088  919134 fix.go:200] guest clock delta is within tolerance: 99.312253ms
	I0429 13:50:44.087095  919134 start.go:83] releasing machines lock for "no-preload-301942", held for 29.694148543s
	I0429 13:50:44.087135  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.087477  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:44.091052  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.091524  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.091555  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.091785  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092442  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092678  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:50:44.092782  919134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:50:44.092842  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:44.092962  919134 ssh_runner.go:195] Run: cat /version.json
	I0429 13:50:44.092982  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:50:44.096519  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.096813  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.096868  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.096891  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.097103  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:44.097327  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:44.097344  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:44.097361  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:44.097505  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:44.097661  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:50:44.097757  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:44.097818  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:50:44.097945  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:50:44.098057  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:50:44.214425  919134 ssh_runner.go:195] Run: systemctl --version
	I0429 13:50:44.223033  919134 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:50:44.405327  919134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:50:44.412344  919134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:50:44.412416  919134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:50:44.431971  919134 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 13:50:44.432013  919134 start.go:494] detecting cgroup driver to use...
	I0429 13:50:44.432107  919134 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:50:44.451682  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:50:44.468282  919134 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:50:44.468358  919134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:50:44.483997  919134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:50:44.500600  919134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:50:44.622246  919134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:50:44.786391  919134 docker.go:233] disabling docker service ...
	I0429 13:50:44.786480  919134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:50:44.803848  919134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:50:44.819965  919134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:50:44.970777  919134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:50:45.096781  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:50:45.115097  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:50:45.138507  919134 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:50:45.138569  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.151154  919134 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:50:45.151259  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.163661  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.179765  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.195459  919134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:50:45.210796  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.222690  919134 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:50:45.245769  919134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
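Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from the VM (surrounding TOML sections and key ordering may differ):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]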
	I0429 13:50:45.258607  919134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:50:45.273155  919134 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 13:50:45.273257  919134 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 13:50:45.294632  919134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:50:45.306540  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:50:45.449440  919134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:50:45.615354  919134 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:50:45.615506  919134 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:50:45.621071  919134 start.go:562] Will wait 60s for crictl version
	I0429 13:50:45.621161  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:45.625720  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:50:45.671281  919134 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:50:45.671483  919134 ssh_runner.go:195] Run: crio --version
	I0429 13:50:45.711166  919134 ssh_runner.go:195] Run: crio --version
	I0429 13:50:45.746855  919134 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:50:44.090086  919444 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 13:50:44.090318  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:50:44.090378  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:50:44.111941  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0429 13:50:44.112397  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:50:44.113093  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:50:44.113125  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:50:44.113486  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:50:44.113684  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:50:44.113843  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:50:44.114029  919444 start.go:159] libmachine.API.Create for "embed-certs-954581" (driver="kvm2")
	I0429 13:50:44.114063  919444 client.go:168] LocalClient.Create starting
	I0429 13:50:44.114100  919444 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem
	I0429 13:50:44.114155  919444 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:44.114175  919444 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:44.114247  919444 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem
	I0429 13:50:44.114274  919444 main.go:141] libmachine: Decoding PEM data...
	I0429 13:50:44.114291  919444 main.go:141] libmachine: Parsing certificate...
	I0429 13:50:44.114318  919444 main.go:141] libmachine: Running pre-create checks...
	I0429 13:50:44.114330  919444 main.go:141] libmachine: (embed-certs-954581) Calling .PreCreateCheck
	I0429 13:50:44.114830  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:50:44.115484  919444 main.go:141] libmachine: Creating machine...
	I0429 13:50:44.115504  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Create
	I0429 13:50:44.115675  919444 main.go:141] libmachine: (embed-certs-954581) Creating KVM machine...
	I0429 13:50:44.117218  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found existing default KVM network
	I0429 13:50:44.118861  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.118696  919618 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0429 13:50:44.118917  919444 main.go:141] libmachine: (embed-certs-954581) DBG | created network xml: 
	I0429 13:50:44.118931  919444 main.go:141] libmachine: (embed-certs-954581) DBG | <network>
	I0429 13:50:44.118960  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <name>mk-embed-certs-954581</name>
	I0429 13:50:44.118974  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <dns enable='no'/>
	I0429 13:50:44.118984  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   
	I0429 13:50:44.118999  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 13:50:44.119009  919444 main.go:141] libmachine: (embed-certs-954581) DBG |     <dhcp>
	I0429 13:50:44.119023  919444 main.go:141] libmachine: (embed-certs-954581) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 13:50:44.119037  919444 main.go:141] libmachine: (embed-certs-954581) DBG |     </dhcp>
	I0429 13:50:44.119055  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   </ip>
	I0429 13:50:44.119065  919444 main.go:141] libmachine: (embed-certs-954581) DBG |   
	I0429 13:50:44.119074  919444 main.go:141] libmachine: (embed-certs-954581) DBG | </network>
	I0429 13:50:44.119084  919444 main.go:141] libmachine: (embed-certs-954581) DBG | 
	I0429 13:50:44.124916  919444 main.go:141] libmachine: (embed-certs-954581) DBG | trying to create private KVM network mk-embed-certs-954581 192.168.39.0/24...
	I0429 13:50:44.216272  919444 main.go:141] libmachine: (embed-certs-954581) DBG | private KVM network mk-embed-certs-954581 192.168.39.0/24 created
	I0429 13:50:44.216308  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.216216  919618 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:44.216331  919444 main.go:141] libmachine: (embed-certs-954581) Setting up store path in /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 ...
	I0429 13:50:44.216342  919444 main.go:141] libmachine: (embed-certs-954581) Building disk image from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 13:50:44.216373  919444 main.go:141] libmachine: (embed-certs-954581) Downloading /home/jenkins/minikube-integration/18773-847310/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 13:50:44.509119  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.508943  919618 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa...
	I0429 13:50:44.591508  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.591321  919618 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/embed-certs-954581.rawdisk...
	I0429 13:50:44.591544  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Writing magic tar header
	I0429 13:50:44.591562  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Writing SSH key tar header
	I0429 13:50:44.591708  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:44.591574  919618 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 ...
	I0429 13:50:44.591771  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581
	I0429 13:50:44.591785  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581 (perms=drwx------)
	I0429 13:50:44.591804  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube/machines (perms=drwxr-xr-x)
	I0429 13:50:44.591818  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310/.minikube (perms=drwxr-xr-x)
	I0429 13:50:44.591829  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube/machines
	I0429 13:50:44.591843  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 13:50:44.591854  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration/18773-847310 (perms=drwxrwxr-x)
	I0429 13:50:44.591868  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 13:50:44.591877  919444 main.go:141] libmachine: (embed-certs-954581) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 13:50:44.591888  919444 main.go:141] libmachine: (embed-certs-954581) Creating domain...
	I0429 13:50:44.591906  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-847310
	I0429 13:50:44.591915  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 13:50:44.591935  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home/jenkins
	I0429 13:50:44.591952  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Checking permissions on dir: /home
	I0429 13:50:44.591965  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Skipping /home - not owner
	I0429 13:50:44.593385  919444 main.go:141] libmachine: (embed-certs-954581) define libvirt domain using xml: 
	I0429 13:50:44.593425  919444 main.go:141] libmachine: (embed-certs-954581) <domain type='kvm'>
	I0429 13:50:44.593439  919444 main.go:141] libmachine: (embed-certs-954581)   <name>embed-certs-954581</name>
	I0429 13:50:44.593452  919444 main.go:141] libmachine: (embed-certs-954581)   <memory unit='MiB'>2200</memory>
	I0429 13:50:44.593463  919444 main.go:141] libmachine: (embed-certs-954581)   <vcpu>2</vcpu>
	I0429 13:50:44.593470  919444 main.go:141] libmachine: (embed-certs-954581)   <features>
	I0429 13:50:44.593481  919444 main.go:141] libmachine: (embed-certs-954581)     <acpi/>
	I0429 13:50:44.593492  919444 main.go:141] libmachine: (embed-certs-954581)     <apic/>
	I0429 13:50:44.593501  919444 main.go:141] libmachine: (embed-certs-954581)     <pae/>
	I0429 13:50:44.593522  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.593557  919444 main.go:141] libmachine: (embed-certs-954581)   </features>
	I0429 13:50:44.593582  919444 main.go:141] libmachine: (embed-certs-954581)   <cpu mode='host-passthrough'>
	I0429 13:50:44.593596  919444 main.go:141] libmachine: (embed-certs-954581)   
	I0429 13:50:44.593607  919444 main.go:141] libmachine: (embed-certs-954581)   </cpu>
	I0429 13:50:44.593620  919444 main.go:141] libmachine: (embed-certs-954581)   <os>
	I0429 13:50:44.593631  919444 main.go:141] libmachine: (embed-certs-954581)     <type>hvm</type>
	I0429 13:50:44.593642  919444 main.go:141] libmachine: (embed-certs-954581)     <boot dev='cdrom'/>
	I0429 13:50:44.593653  919444 main.go:141] libmachine: (embed-certs-954581)     <boot dev='hd'/>
	I0429 13:50:44.593739  919444 main.go:141] libmachine: (embed-certs-954581)     <bootmenu enable='no'/>
	I0429 13:50:44.593775  919444 main.go:141] libmachine: (embed-certs-954581)   </os>
	I0429 13:50:44.593790  919444 main.go:141] libmachine: (embed-certs-954581)   <devices>
	I0429 13:50:44.593809  919444 main.go:141] libmachine: (embed-certs-954581)     <disk type='file' device='cdrom'>
	I0429 13:50:44.593823  919444 main.go:141] libmachine: (embed-certs-954581)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/boot2docker.iso'/>
	I0429 13:50:44.593846  919444 main.go:141] libmachine: (embed-certs-954581)       <target dev='hdc' bus='scsi'/>
	I0429 13:50:44.593859  919444 main.go:141] libmachine: (embed-certs-954581)       <readonly/>
	I0429 13:50:44.593878  919444 main.go:141] libmachine: (embed-certs-954581)     </disk>
	I0429 13:50:44.593891  919444 main.go:141] libmachine: (embed-certs-954581)     <disk type='file' device='disk'>
	I0429 13:50:44.593907  919444 main.go:141] libmachine: (embed-certs-954581)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 13:50:44.593935  919444 main.go:141] libmachine: (embed-certs-954581)       <source file='/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/embed-certs-954581.rawdisk'/>
	I0429 13:50:44.593951  919444 main.go:141] libmachine: (embed-certs-954581)       <target dev='hda' bus='virtio'/>
	I0429 13:50:44.593963  919444 main.go:141] libmachine: (embed-certs-954581)     </disk>
	I0429 13:50:44.593974  919444 main.go:141] libmachine: (embed-certs-954581)     <interface type='network'>
	I0429 13:50:44.593983  919444 main.go:141] libmachine: (embed-certs-954581)       <source network='mk-embed-certs-954581'/>
	I0429 13:50:44.593993  919444 main.go:141] libmachine: (embed-certs-954581)       <model type='virtio'/>
	I0429 13:50:44.594000  919444 main.go:141] libmachine: (embed-certs-954581)     </interface>
	I0429 13:50:44.594011  919444 main.go:141] libmachine: (embed-certs-954581)     <interface type='network'>
	I0429 13:50:44.594027  919444 main.go:141] libmachine: (embed-certs-954581)       <source network='default'/>
	I0429 13:50:44.594038  919444 main.go:141] libmachine: (embed-certs-954581)       <model type='virtio'/>
	I0429 13:50:44.594046  919444 main.go:141] libmachine: (embed-certs-954581)     </interface>
	I0429 13:50:44.594056  919444 main.go:141] libmachine: (embed-certs-954581)     <serial type='pty'>
	I0429 13:50:44.594064  919444 main.go:141] libmachine: (embed-certs-954581)       <target port='0'/>
	I0429 13:50:44.594073  919444 main.go:141] libmachine: (embed-certs-954581)     </serial>
	I0429 13:50:44.594080  919444 main.go:141] libmachine: (embed-certs-954581)     <console type='pty'>
	I0429 13:50:44.594095  919444 main.go:141] libmachine: (embed-certs-954581)       <target type='serial' port='0'/>
	I0429 13:50:44.594106  919444 main.go:141] libmachine: (embed-certs-954581)     </console>
	I0429 13:50:44.594113  919444 main.go:141] libmachine: (embed-certs-954581)     <rng model='virtio'>
	I0429 13:50:44.594125  919444 main.go:141] libmachine: (embed-certs-954581)       <backend model='random'>/dev/random</backend>
	I0429 13:50:44.594142  919444 main.go:141] libmachine: (embed-certs-954581)     </rng>
	I0429 13:50:44.594165  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.594198  919444 main.go:141] libmachine: (embed-certs-954581)     
	I0429 13:50:44.594209  919444 main.go:141] libmachine: (embed-certs-954581)   </devices>
	I0429 13:50:44.594220  919444 main.go:141] libmachine: (embed-certs-954581) </domain>
	I0429 13:50:44.594234  919444 main.go:141] libmachine: (embed-certs-954581) 
	I0429 13:50:44.598938  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:9a:a9:58 in network default
	I0429 13:50:44.599584  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:44.599601  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring networks are active...
	I0429 13:50:44.600452  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring network default is active
	I0429 13:50:44.600784  919444 main.go:141] libmachine: (embed-certs-954581) Ensuring network mk-embed-certs-954581 is active
	I0429 13:50:44.601376  919444 main.go:141] libmachine: (embed-certs-954581) Getting domain xml...
	I0429 13:50:44.602165  919444 main.go:141] libmachine: (embed-certs-954581) Creating domain...
	I0429 13:50:46.030304  919444 main.go:141] libmachine: (embed-certs-954581) Waiting to get IP...
	I0429 13:50:46.031166  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.031816  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.031848  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.031781  919618 retry.go:31] will retry after 254.67243ms: waiting for machine to come up
	I0429 13:50:46.288591  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.289494  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.289528  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.289429  919618 retry.go:31] will retry after 297.459928ms: waiting for machine to come up
	I0429 13:50:46.589111  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.589816  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.589848  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.589774  919618 retry.go:31] will retry after 315.635792ms: waiting for machine to come up
	I0429 13:50:46.907825  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:46.908923  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:46.908956  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:46.908839  919618 retry.go:31] will retry after 450.723175ms: waiting for machine to come up
	I0429 13:50:47.361803  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:47.362390  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:47.362427  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:47.362341  919618 retry.go:31] will retry after 633.317544ms: waiting for machine to come up
	I0429 13:50:44.665038  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:44.665327  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
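The kubelet-check failure above is a plain HTTP probe against the kubelet's healthz port; an equivalent standalone check in Go (endpoint taken from the log line, timeout chosen arbitrarily) would be:

	// kubelet_healthz.go: same probe kubeadm's kubelet-check retries.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubelet not healthy yet:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}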
	I0429 13:50:43.595040  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:46.100624  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:45.748432  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetIP
	I0429 13:50:45.751874  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:45.752355  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:50:45.752385  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:50:45.752670  919134 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 13:50:45.757915  919134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:50:45.780500  919134 kubeadm.go:877] updating cluster {Name:no-preload-301942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:50:45.780669  919134 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:50:45.780712  919134 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:50:45.821848  919134 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 13:50:45.821887  919134 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 13:50:45.821975  919134 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:45.822010  919134 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.822121  919134 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.822162  919134 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 13:50:45.822158  919134 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.822162  919134 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.822465  919134 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.822733  919134 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.823528  919134 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.823583  919134 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.823585  919134 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.823528  919134 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 13:50:45.823703  919134 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.823703  919134 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:45.823772  919134 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.824256  919134 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.956948  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:45.958745  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:45.963110  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:45.979658  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:45.982114  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:45.986549  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:45.999108  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 13:50:46.036991  919134 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.145409  919134 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 13:50:46.145479  919134 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:46.145417  919134 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 13:50:46.145543  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.145543  919134 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:46.145594  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.183007  919134 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 13:50:46.183068  919134 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:46.183124  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.237111  919134 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 13:50:46.237172  919134 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:46.237237  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.238614  919134 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 13:50:46.238660  919134 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 13:50:46.238698  919134 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:46.238763  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.238663  919134 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:46.238854  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.253235  919134 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0429 13:50:46.253304  919134 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0429 13:50:46.253366  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.262094  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 13:50:46.262145  919134 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 13:50:46.262181  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:50:46.262189  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:50:46.262199  919134 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.262217  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:50:46.262236  919134 ssh_runner.go:195] Run: which crictl
	I0429 13:50:46.262238  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:50:46.262244  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:50:46.264771  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
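The probe-and-clean pass above runs `sudo podman image inspect --format {{.Id}}` for every cached image and, when the expected tag is not in the runtime, clears any stale copy with `crictl rmi` before scheduling a transfer. A minimal Go sketch of that check-then-remove pattern follows; it is an illustration only (the image list and the use of os/exec are assumptions, not minikube's cache_images code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imagePresent mirrors the "podman image inspect --format {{.Id}}" probe from the log:
	// a non-empty ID on stdout means the runtime already holds the image.
	func imagePresent(image string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}
	
	func main() {
		images := []string{
			"registry.k8s.io/pause:3.9",
			"registry.k8s.io/etcd:3.5.12-0",
		}
		for _, img := range images {
			if imagePresent(img) {
				fmt.Println("already present:", img)
				continue
			}
			// Mirror the cleanup step in the log: drop any stale tag before the tarball is reloaded.
			_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
			fmt.Println("needs transfer:", img)
		}
	}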
	I0429 13:50:46.419685  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 13:50:46.419828  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:46.419897  919134 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:50:46.420241  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 13:50:46.420347  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:50:46.452026  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 13:50:46.452151  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 13:50:46.452194  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 13:50:46.452257  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:46.452165  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:46.452261  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:46.452033  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 13:50:46.452496  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:50:46.474013  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0429 13:50:46.474072  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0429 13:50:46.474148  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0429 13:50:46.474261  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
	I0429 13:50:46.527195  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0429 13:50:46.527258  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0429 13:50:46.527279  919134 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 13:50:46.527449  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:46.527645  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0429 13:50:46.527675  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0429 13:50:46.527719  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0429 13:50:46.527752  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
	I0429 13:50:46.527766  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0429 13:50:46.527842  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0429 13:50:46.527864  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0429 13:50:46.527886  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0429 13:50:46.527772  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0429 13:50:46.527931  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0429 13:50:46.597013  919134 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 13:50:46.597072  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
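Each "existence check" above is a remote `stat -c "%s %y"` that exits non-zero when the tarball is absent, and that failure is what triggers the following scp from the local image cache. A local-only sketch of the same check-then-copy idea, with hypothetical paths in place of the ssh_runner transfers used here:

	package main
	
	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)
	
	// ensureOnDisk runs the same `stat -c "%s %y"` probe as the log and, if it fails,
	// copies the cached tarball into place (locally here; minikube does this over SSH).
	func ensureOnDisk(src, dst string) error {
		if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
			return nil // already present, nothing to transfer
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
	
	func main() {
		// Hypothetical paths; the log's real source is ~/.minikube/cache/images and the
		// destination is /var/lib/minikube/images on the VM.
		if err := ensureOnDisk("/tmp/cache/pause_3.9", "/tmp/images/pause_3.9"); err != nil {
			fmt.Println("transfer failed:", err)
		}
	}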
	I0429 13:50:46.659810  919134 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0429 13:50:46.659909  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0429 13:50:47.442188  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0429 13:50:47.442244  919134 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:47.442305  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 13:50:47.997928  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:47.998457  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:47.998491  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:47.998402  919618 retry.go:31] will retry after 649.94283ms: waiting for machine to come up
	I0429 13:50:48.650513  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:48.651154  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:48.651201  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:48.651093  919618 retry.go:31] will retry after 1.191513652s: waiting for machine to come up
	I0429 13:50:49.844201  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:49.844874  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:49.844924  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:49.844827  919618 retry.go:31] will retry after 1.445213488s: waiting for machine to come up
	I0429 13:50:51.291628  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:51.292244  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:51.292273  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:51.292180  919618 retry.go:31] will retry after 1.132788812s: waiting for machine to come up
	I0429 13:50:48.595575  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:50.596709  905474 pod_ready.go:102] pod "kube-proxy-x79g5" in "kube-system" namespace has status "Ready":"False"
	I0429 13:50:52.088142  905474 pod_ready.go:81] duration metric: took 4m0.00111967s for pod "kube-proxy-x79g5" in "kube-system" namespace to be "Ready" ...
	E0429 13:50:52.088182  905474 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-x79g5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 13:50:52.088226  905474 pod_ready.go:38] duration metric: took 4m8.040584265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:50:52.088261  905474 kubeadm.go:591] duration metric: took 4m20.995114758s to restartPrimaryControlPlane
	W0429 13:50:52.088344  905474 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 13:50:52.088383  905474 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 13:50:49.373350  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.931007175s)
	I0429 13:50:49.373404  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 13:50:49.373444  919134 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:49.373517  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 13:50:51.574587  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.201020256s)
	I0429 13:50:51.574634  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 13:50:51.574669  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:51.574725  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 13:50:54.167651  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.592875106s)
	I0429 13:50:54.167760  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 13:50:54.167856  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:54.167957  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 13:50:52.427521  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:52.428171  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:52.428206  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:52.428098  919618 retry.go:31] will retry after 1.655977729s: waiting for machine to come up
	I0429 13:50:54.086567  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:54.087168  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:54.087208  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:54.087065  919618 retry.go:31] will retry after 2.560858802s: waiting for machine to come up
	I0429 13:50:56.650010  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:50:56.650639  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:50:56.650670  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:50:56.650606  919618 retry.go:31] will retry after 3.561933506s: waiting for machine to come up
	I0429 13:50:54.664942  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:50:54.665230  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:50:56.456830  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.288835544s)
	I0429 13:50:56.456870  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 13:50:56.456903  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:56.456967  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 13:50:59.159348  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.702341445s)
	I0429 13:50:59.159408  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 13:50:59.159450  919134 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:50:59.159540  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 13:51:00.214175  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:00.215095  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:51:00.215130  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:51:00.215032  919618 retry.go:31] will retry after 4.090008393s: waiting for machine to come up
	I0429 13:51:01.630259  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.470684812s)
	I0429 13:51:01.630307  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 13:51:01.630355  919134 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:51:01.630429  919134 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 13:51:04.307738  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:04.308523  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find current IP address of domain embed-certs-954581 in network mk-embed-certs-954581
	I0429 13:51:04.308557  919444 main.go:141] libmachine: (embed-certs-954581) DBG | I0429 13:51:04.308465  919618 retry.go:31] will retry after 4.84749516s: waiting for machine to come up
	I0429 13:51:05.919531  919134 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.289070177s)
	I0429 13:51:05.919584  919134 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18773-847310/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 13:51:05.919626  919134 cache_images.go:123] Successfully loaded all cached images
	I0429 13:51:05.919634  919134 cache_images.go:92] duration metric: took 20.097728085s to LoadCachedImages
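The 20s LoadCachedImages total above is dominated by sequential `sudo podman load -i` calls, one per tarball, each of which the log times individually. A rough illustration of that loop (tarball paths copied from the log; the timing wrapper is an assumption, not minikube's code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// The same tarballs the log loads, imported one after another with `podman load -i`.
		tarballs := []string{
			"/var/lib/minikube/images/pause_3.9",
			"/var/lib/minikube/images/storage-provisioner_v5",
			"/var/lib/minikube/images/coredns_v1.11.1",
			"/var/lib/minikube/images/etcd_3.5.12-0",
		}
		for _, t := range tarballs {
			start := time.Now()
			if err := exec.Command("sudo", "podman", "load", "-i", t).Run(); err != nil {
				fmt.Printf("load %s failed: %v\n", t, err)
				continue
			}
			fmt.Printf("loaded %s in %s\n", t, time.Since(start))
		}
	}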
	I0429 13:51:05.919646  919134 kubeadm.go:928] updating node { 192.168.72.248 8443 v1.30.0 crio true true} ...
	I0429 13:51:05.919803  919134 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-301942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:51:05.919879  919134 ssh_runner.go:195] Run: crio config
	I0429 13:51:05.974005  919134 cni.go:84] Creating CNI manager for ""
	I0429 13:51:05.974035  919134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:05.974045  919134 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:51:05.974087  919134 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.248 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-301942 NodeName:no-preload-301942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:51:05.974283  919134 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-301942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
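The generated /var/tmp/minikube/kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks those documents and prints a couple of fields, assuming gopkg.in/yaml.v3 as the parser (not something this run uses):

	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		// Decode the multi-document config written to /var/tmp/minikube/kubeadm.yaml and
		// print each kind plus kubernetesVersion (nil for documents that don't carry it).
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF after the last document
			}
			fmt.Printf("kind=%v kubernetesVersion=%v\n", doc["kind"], doc["kubernetesVersion"])
		}
	}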
	
	I0429 13:51:05.974366  919134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:05.986160  919134 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 13:51:05.986248  919134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:05.998572  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 13:51:05.998593  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 13:51:05.998689  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 13:51:05.998729  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 13:51:05.998578  919134 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 13:51:05.998863  919134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:06.007136  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 13:51:06.007184  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 13:51:06.007417  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 13:51:06.007451  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 13:51:06.025794  919134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 13:51:06.086289  919134 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 13:51:06.086353  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
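The kubectl, kubeadm and kubelet binaries above come from dl.k8s.io with a `checksum=file:...sha256` hint and are then copied into /var/lib/minikube/binaries/v1.30.0. Verifying such a download against the published digest is a plain SHA-256 comparison; a generic sketch (the local path is illustrative, and comparing against the fetched .sha256 value is left to the caller):

	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)
	
	// fileSHA256 hashes a downloaded binary so it can be compared with the value in the
	// corresponding .sha256 file referenced by the checksum= query in the log.
	func fileSHA256(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}
	
	func main() {
		sum, err := fileSHA256("./kubectl") // hypothetical local copy of the downloaded binary
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("sha256:", sum)
	}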
	I0429 13:51:06.893595  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:51:06.906058  919134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 13:51:06.926657  919134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:51:06.946685  919134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 13:51:06.967107  919134 ssh_runner.go:195] Run: grep 192.168.72.248	control-plane.minikube.internal$ /etc/hosts
	I0429 13:51:06.971804  919134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:51:06.987553  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:07.114075  919134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:07.133090  919134 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942 for IP: 192.168.72.248
	I0429 13:51:07.133121  919134 certs.go:194] generating shared ca certs ...
	I0429 13:51:07.133144  919134 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.133374  919134 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:51:07.133435  919134 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:51:07.133449  919134 certs.go:256] generating profile certs ...
	I0429 13:51:07.133557  919134 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key
	I0429 13:51:07.133578  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt with IP's: []
	I0429 13:51:07.260676  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt ...
	I0429 13:51:07.260725  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.crt: {Name:mkb41553d5f76c917cb52d4509ddc4e17f9afc1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.260937  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key ...
	I0429 13:51:07.260950  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/client.key: {Name:mkb3f0986631b04f64ba4141a2169b40442bc714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.261035  919134 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6
	I0429 13:51:07.261051  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.248]
	I0429 13:51:07.407286  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 ...
	I0429 13:51:07.407332  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6: {Name:mk5dd232d592ed352287750e0666fbc7bd901057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.407559  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6 ...
	I0429 13:51:07.407578  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6: {Name:mkf859d5ccfc99dc996fedeca0ebc39e6ef5d546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.407656  919134 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt.7cab61f6 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt
	I0429 13:51:07.407733  919134 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key.7cab61f6 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key
	I0429 13:51:07.407795  919134 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key
	I0429 13:51:07.407815  919134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt with IP's: []
	I0429 13:51:07.623612  919134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt ...
	I0429 13:51:07.623651  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt: {Name:mk9e40ae0d749113254c26277758142a36d613ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:07.623846  919134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key ...
	I0429 13:51:07.623868  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key: {Name:mk0982c712a049a37b9b7146b2ab2d48ef52573a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
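The profile certificates generated above are signed against the shared minikubeCA and carry the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.248). One way to sanity-check such a certificate afterwards is to parse it with crypto/x509; a small sketch, assuming a local copy of apiserver.crt:

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Read the PEM-encoded certificate and print its subject, IP SANs and expiry.
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("subject:  ", cert.Subject.CommonName)
		fmt.Println("IP SANs:  ", cert.IPAddresses)
		fmt.Println("not after:", cert.NotAfter)
	}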
	I0429 13:51:07.624140  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:51:07.624198  919134 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:51:07.624211  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:51:07.624231  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:51:07.624256  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:51:07.624278  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:51:07.624321  919134 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:07.624966  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:51:07.654509  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:51:07.684299  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:51:07.712923  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:51:07.741578  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:51:07.769477  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 13:51:07.799962  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:51:07.836412  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/no-preload-301942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 13:51:07.869109  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:51:07.898074  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:51:07.928928  919134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:51:07.956799  919134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:51:07.978029  919134 ssh_runner.go:195] Run: openssl version
	I0429 13:51:07.985120  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:51:08.000058  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.005630  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.005714  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:08.012796  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:51:08.028428  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:51:08.043976  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.049651  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.049732  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:51:08.056596  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
	I0429 13:51:08.071493  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:51:08.085158  919134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.090671  919134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.090765  919134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:51:08.097571  919134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
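The `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA into the system trust store under its subject-hash name (for example b5213941.0 for minikubeCA.pem). A sketch of those two steps, shelling out to openssl as the log does and using illustrative paths:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Ask openssl for the certificate's subject hash, then create the <hash>.0
		// symlink that OpenSSL-based clients look up in /etc/ssl/certs.
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			fmt.Println(err)
			return
		}
		fmt.Println("linked", link, "->", pemPath)
	}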
	I0429 13:51:08.111342  919134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:51:08.116148  919134 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 13:51:08.116214  919134 kubeadm.go:391] StartCluster: {Name:no-preload-301942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-301942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:51:08.116289  919134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:51:08.116341  919134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:51:08.160313  919134 cri.go:89] found id: ""
	I0429 13:51:08.160417  919134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:51:08.172726  919134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:08.185461  919134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:08.197620  919134 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:08.197648  919134 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:08.197702  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:08.209739  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:08.209903  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:08.222364  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:08.234620  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:08.234713  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:08.247057  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:08.259300  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:08.259402  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:08.273528  919134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:08.286481  919134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:08.286573  919134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:51:08.300494  919134 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:08.378445  919134 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:08.378541  919134 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:08.508565  919134 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:08.508693  919134 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:08.508794  919134 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:08.818230  919134 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:08.820730  919134 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:08.820871  919134 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:08.820954  919134 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:08.955234  919134 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 13:51:09.042524  919134 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 13:51:09.386399  919134 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 13:51:09.590576  919134 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 13:51:09.668811  919134 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 13:51:09.669024  919134 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-301942] and IPs [192.168.72.248 127.0.0.1 ::1]
	I0429 13:51:09.895601  919134 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 13:51:09.895815  919134 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-301942] and IPs [192.168.72.248 127.0.0.1 ::1]
	I0429 13:51:10.190457  919134 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 13:51:10.537992  919134 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 13:51:10.953831  919134 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 13:51:10.955117  919134 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:11.073119  919134 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:11.208877  919134 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:11.398537  919134 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:11.610750  919134 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:11.896751  919134 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:11.897413  919134 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:11.900746  919134 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:09.158039  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.159434  919444 main.go:141] libmachine: (embed-certs-954581) Found IP for machine: 192.168.39.231
	I0429 13:51:09.159490  919444 main.go:141] libmachine: (embed-certs-954581) Reserving static IP address...
	I0429 13:51:09.159506  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has current primary IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.160452  919444 main.go:141] libmachine: (embed-certs-954581) DBG | unable to find host DHCP lease matching {name: "embed-certs-954581", mac: "52:54:00:dc:58:c7", ip: "192.168.39.231"} in network mk-embed-certs-954581
	I0429 13:51:09.282157  919444 main.go:141] libmachine: (embed-certs-954581) Reserved static IP address: 192.168.39.231
	I0429 13:51:09.282273  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Getting to WaitForSSH function...
	I0429 13:51:09.282302  919444 main.go:141] libmachine: (embed-certs-954581) Waiting for SSH to be available...
	I0429 13:51:09.286545  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.287499  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.287614  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.287677  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using SSH client type: external
	I0429 13:51:09.287693  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa (-rw-------)
	I0429 13:51:09.287726  919444 main.go:141] libmachine: (embed-certs-954581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 13:51:09.287742  919444 main.go:141] libmachine: (embed-certs-954581) DBG | About to run SSH command:
	I0429 13:51:09.287754  919444 main.go:141] libmachine: (embed-certs-954581) DBG | exit 0
	I0429 13:51:09.416081  919444 main.go:141] libmachine: (embed-certs-954581) DBG | SSH cmd err, output: <nil>: 
	I0429 13:51:09.416369  919444 main.go:141] libmachine: (embed-certs-954581) KVM machine creation complete!
	I0429 13:51:09.416791  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:51:09.417464  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:09.417757  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:09.417997  919444 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 13:51:09.418014  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:09.419718  919444 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 13:51:09.419739  919444 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 13:51:09.419748  919444 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 13:51:09.419756  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.423098  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.423604  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.423635  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.423879  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.424111  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.424310  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.424454  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.424691  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.424971  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.424991  919444 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 13:51:09.547412  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 13:51:09.547444  919444 main.go:141] libmachine: Detecting the provisioner...
	I0429 13:51:09.547456  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.550641  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.550917  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.550946  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.551146  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.551336  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.551490  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.551640  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.551900  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.552128  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.552143  919444 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 13:51:09.661024  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 13:51:09.661129  919444 main.go:141] libmachine: found compatible host: buildroot
	I0429 13:51:09.661138  919444 main.go:141] libmachine: Provisioning with buildroot...
	I0429 13:51:09.661148  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.661426  919444 buildroot.go:166] provisioning hostname "embed-certs-954581"
	I0429 13:51:09.661455  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.661668  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.664867  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.665322  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.665359  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.665612  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.665823  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.666081  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.666265  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.666480  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.666669  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.666683  919444 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-954581 && echo "embed-certs-954581" | sudo tee /etc/hostname
	I0429 13:51:09.795897  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-954581
	
	I0429 13:51:09.795941  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.798954  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.799330  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.799392  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.799692  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:09.799960  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.800191  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:09.800367  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:09.800575  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:09.800774  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:09.800792  919444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-954581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-954581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-954581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:51:09.929304  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
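For reference, the two provisioning steps above (set the hostname, then keep /etc/hosts consistent) can be reproduced on the guest with the standalone sketch below; the hostname value is copied from this log, and the script is only illustrative, not minikube's actual code path.

# set the hostname and persist it (mirrors the logged "sudo hostname ... | sudo tee /etc/hostname" step)
NEW_HOSTNAME="embed-certs-954581"
sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname

# make /etc/hosts resolve the new name, rewriting an existing 127.0.1.1 entry when present
if ! grep -q "[[:space:]]$NEW_HOSTNAME\$" /etc/hosts; then
  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
  else
    echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
  fi
fi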
	I0429 13:51:09.929339  919444 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-847310/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-847310/.minikube}
	I0429 13:51:09.929380  919444 buildroot.go:174] setting up certificates
	I0429 13:51:09.929393  919444 provision.go:84] configureAuth start
	I0429 13:51:09.929405  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetMachineName
	I0429 13:51:09.929792  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:09.933601  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.934038  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.934081  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.934469  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:09.937554  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.937992  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:09.938024  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:09.938212  919444 provision.go:143] copyHostCerts
	I0429 13:51:09.938284  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem, removing ...
	I0429 13:51:09.938295  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem
	I0429 13:51:09.938357  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/ca.pem (1078 bytes)
	I0429 13:51:09.938474  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem, removing ...
	I0429 13:51:09.938486  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem
	I0429 13:51:09.938514  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/cert.pem (1123 bytes)
	I0429 13:51:09.938615  919444 exec_runner.go:144] found /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem, removing ...
	I0429 13:51:09.938636  919444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem
	I0429 13:51:09.938751  919444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-847310/.minikube/key.pem (1679 bytes)
	I0429 13:51:09.938891  919444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem org=jenkins.embed-certs-954581 san=[127.0.0.1 192.168.39.231 embed-certs-954581 localhost minikube]
	I0429 13:51:10.036740  919444 provision.go:177] copyRemoteCerts
	I0429 13:51:10.036843  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:51:10.036891  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.040292  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.040658  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.040687  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.040959  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.041193  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.041377  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.041566  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.127491  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:51:10.159759  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:51:10.191155  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 13:51:10.224289  919444 provision.go:87] duration metric: took 294.865013ms to configureAuth
	I0429 13:51:10.224353  919444 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:51:10.224667  919444 config.go:182] Loaded profile config "embed-certs-954581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:10.224962  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.228688  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.229184  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.229226  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.229432  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.229645  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.230130  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.230367  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.230564  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:10.230841  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:10.230869  919444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 13:51:10.549937  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 13:51:10.549971  919444 main.go:141] libmachine: Checking connection to Docker...
	I0429 13:51:10.549980  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetURL
	I0429 13:51:10.551487  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Using libvirt version 6000000
	I0429 13:51:10.554365  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.554780  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.554810  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.554952  919444 main.go:141] libmachine: Docker is up and running!
	I0429 13:51:10.554970  919444 main.go:141] libmachine: Reticulating splines...
	I0429 13:51:10.554979  919444 client.go:171] duration metric: took 26.440904795s to LocalClient.Create
	I0429 13:51:10.555005  919444 start.go:167] duration metric: took 26.440979338s to libmachine.API.Create "embed-certs-954581"
	I0429 13:51:10.555015  919444 start.go:293] postStartSetup for "embed-certs-954581" (driver="kvm2")
	I0429 13:51:10.555026  919444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:51:10.555053  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.555298  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:51:10.555317  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.557809  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.558196  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.558261  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.558426  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.558660  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.558873  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.559064  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.646843  919444 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:51:10.652239  919444 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:51:10.652283  919444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/addons for local assets ...
	I0429 13:51:10.652363  919444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-847310/.minikube/files for local assets ...
	I0429 13:51:10.652480  919444 filesync.go:149] local asset: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem -> 8546602.pem in /etc/ssl/certs
	I0429 13:51:10.652637  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:51:10.665144  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:10.700048  919444 start.go:296] duration metric: took 145.014358ms for postStartSetup
	I0429 13:51:10.700113  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetConfigRaw
	I0429 13:51:10.700806  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:10.704150  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.704546  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.704588  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.704894  919444 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/config.json ...
	I0429 13:51:10.705159  919444 start.go:128] duration metric: took 26.61766148s to createHost
	I0429 13:51:10.705198  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.708341  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.708695  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.708744  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.708907  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.709158  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.709373  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.709535  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.709737  919444 main.go:141] libmachine: Using SSH client type: native
	I0429 13:51:10.709975  919444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0429 13:51:10.710045  919444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:51:10.821101  919444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714398670.792988724
	
	I0429 13:51:10.821156  919444 fix.go:216] guest clock: 1714398670.792988724
	I0429 13:51:10.821186  919444 fix.go:229] Guest: 2024-04-29 13:51:10.792988724 +0000 UTC Remote: 2024-04-29 13:51:10.705180277 +0000 UTC m=+53.370769379 (delta=87.808447ms)
	I0429 13:51:10.821211  919444 fix.go:200] guest clock delta is within tolerance: 87.808447ms
	I0429 13:51:10.821219  919444 start.go:83] releasing machines lock for "embed-certs-954581", held for 26.733962848s
	I0429 13:51:10.821243  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.821536  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:10.825591  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.826071  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.826110  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.826308  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827020  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827253  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:10.827395  919444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:51:10.827445  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.827581  919444 ssh_runner.go:195] Run: cat /version.json
	I0429 13:51:10.827612  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:10.831679  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.831980  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832152  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.832187  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832370  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.832503  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:10.832533  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:10.832692  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.832695  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:10.832903  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.832909  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:10.833053  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:10.833065  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.833222  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:10.944692  919444 ssh_runner.go:195] Run: systemctl --version
	I0429 13:51:10.953884  919444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 13:51:11.313854  919444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:51:11.320655  919444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:51:11.320739  919444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:51:11.340243  919444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 13:51:11.340289  919444 start.go:494] detecting cgroup driver to use...
	I0429 13:51:11.340377  919444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:51:11.358760  919444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:51:11.374970  919444 docker.go:217] disabling cri-docker service (if available) ...
	I0429 13:51:11.375060  919444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 13:51:11.391326  919444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 13:51:11.407297  919444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 13:51:11.529914  919444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 13:51:11.706413  919444 docker.go:233] disabling docker service ...
	I0429 13:51:11.706566  919444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 13:51:11.727602  919444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 13:51:11.746989  919444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 13:51:11.910976  919444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 13:51:12.055236  919444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 13:51:12.074265  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:51:12.099619  919444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 13:51:12.099712  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.113800  919444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 13:51:12.113891  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.126548  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.140772  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.153556  919444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:51:12.166840  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.179767  919444 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 13:51:12.203385  919444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
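The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; one way to spot-check the outcome on the VM is the grep below (the expected lines are a reconstruction from the commands above, and the exact file layout may differ):

# spot-check the keys the sed edits above are meant to leave in place
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# expected, approximately:
#   pause_image = "registry.k8s.io/pause:3.9"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#     "net.ipv4.ip_unprivileged_port_start=0",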
	I0429 13:51:12.215991  919444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:51:12.227481  919444 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 13:51:12.227557  919444 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 13:51:12.242915  919444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
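Taken together, the failed sysctl probe, the modprobe, and the ip_forward write above amount to the following preparation step, shown here only as an illustrative consolidation:

# load br_netfilter only when the bridge sysctl is absent, then turn on IPv4 forwarding
if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
  sudo modprobe br_netfilter
fi
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"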
	I0429 13:51:12.254164  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:12.377997  919444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 13:51:12.535733  919444 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 13:51:12.535868  919444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 13:51:12.541810  919444 start.go:562] Will wait 60s for crictl version
	I0429 13:51:12.541922  919444 ssh_runner.go:195] Run: which crictl
	I0429 13:51:12.546649  919444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:51:12.591748  919444 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 13:51:12.591845  919444 ssh_runner.go:195] Run: crio --version
	I0429 13:51:12.626052  919444 ssh_runner.go:195] Run: crio --version
	I0429 13:51:12.667631  919444 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 13:51:11.903127  919134 out.go:204]   - Booting up control plane ...
	I0429 13:51:11.903273  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:11.903387  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:11.903603  919134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:11.922214  919134 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:11.924067  919134 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:11.924158  919134 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:12.092154  919134 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:12.092266  919134 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:13.093743  919134 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001687197s
	I0429 13:51:13.093883  919134 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:12.669298  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetIP
	I0429 13:51:12.672724  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:12.673072  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:12.673106  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:12.673420  919444 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 13:51:12.678957  919444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:51:12.693111  919444 kubeadm.go:877] updating cluster {Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:51:12.693235  919444 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 13:51:12.693281  919444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:51:12.729971  919444 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 13:51:12.730046  919444 ssh_runner.go:195] Run: which lz4
	I0429 13:51:12.734550  919444 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 13:51:12.739336  919444 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 13:51:12.739396  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 13:51:14.460188  919444 crio.go:462] duration metric: took 1.725660282s to copy over tarball
	I0429 13:51:14.460294  919444 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 13:51:17.150670  919444 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690338757s)
	I0429 13:51:17.150720  919444 crio.go:469] duration metric: took 2.690491973s to extract the tarball
	I0429 13:51:17.150732  919444 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 13:51:17.191432  919444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 13:51:17.257369  919444 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 13:51:17.257408  919444 cache_images.go:84] Images are preloaded, skipping loading
	I0429 13:51:17.257419  919444 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.0 crio true true} ...
	I0429 13:51:17.257577  919444 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-954581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
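The kubelet unit override printed above is later copied to the VM as a systemd drop-in (see the 10-kubeadm.conf scp further down in this process's log); a standalone sketch of installing it by hand, with every value copied from the log, would look like this:

# write the drop-in as logged, then reload systemd so the new ExecStart is picked up
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-954581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet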
	I0429 13:51:17.257676  919444 ssh_runner.go:195] Run: crio config
	I0429 13:51:17.311855  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:51:17.311898  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:17.311914  919444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:51:17.311954  919444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-954581 NodeName:embed-certs-954581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:51:17.312211  919444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-954581"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:51:17.312315  919444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:51:17.324070  919444 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:51:17.324182  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:51:17.336225  919444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 13:51:17.357281  919444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:51:17.377704  919444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
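With the generated kubeadm configuration now staged at /var/tmp/minikube/kubeadm.yaml.new, a dry-run is one way to sanity-check it before the real init is attempted; this is a sketch for manual debugging, not something the test itself runs:

# render what kubeadm would generate from the staged config, without modifying the host
sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run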
	I0429 13:51:14.664929  916079 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 13:51:14.665184  916079 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 13:51:17.769062  905474 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (25.68064629s)
	I0429 13:51:17.769159  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:17.794186  905474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:17.810436  905474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:17.823114  905474 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:17.823147  905474 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:17.823218  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:17.836532  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:17.836617  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:17.850081  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:17.864596  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:17.864683  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:17.878422  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:17.890459  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:17.890549  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:17.902981  905474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:17.915509  905474 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:17.915586  905474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
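The four grep/rm pairs above implement a single pattern: before the kubeadm init that follows, any kubeconfig that does not reference the expected control-plane endpoint is removed. An illustrative consolidation:

# the same check, folded into one loop: keep a kubeconfig only if it targets the expected endpoint
ENDPOINT="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere, remove it
  fi
done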
	I0429 13:51:17.928376  905474 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:18.001121  905474 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:18.001214  905474 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:18.208783  905474 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:18.208956  905474 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:18.209083  905474 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:18.483982  905474 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:18.485806  905474 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:18.485909  905474 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:18.485980  905474 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:18.486064  905474 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 13:51:18.486138  905474 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 13:51:18.486237  905474 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 13:51:18.486317  905474 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 13:51:18.486402  905474 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 13:51:18.486492  905474 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 13:51:18.486621  905474 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 13:51:18.486725  905474 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 13:51:18.486780  905474 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 13:51:18.486855  905474 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:18.572016  905474 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:18.683084  905474 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:18.854327  905474 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:18.916350  905474 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:19.037439  905474 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:19.038227  905474 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:19.045074  905474 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:19.596877  919134 kubeadm.go:309] [api-check] The API server is healthy after 6.503508213s
	I0429 13:51:19.615859  919134 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:19.639427  919134 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:19.674542  919134 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:19.674825  919134 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-301942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:19.689640  919134 kubeadm.go:309] [bootstrap-token] Using token: j7aq9o.yzlr9atiacx5a508
	I0429 13:51:19.691335  919134 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:19.691513  919134 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:19.708711  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:19.720352  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:19.726432  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:19.732813  919134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:19.738411  919134 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:20.006172  919134 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:20.518396  919134 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:21.006393  919134 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:21.007739  919134 kubeadm.go:309] 
	I0429 13:51:21.007874  919134 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:21.007913  919134 kubeadm.go:309] 
	I0429 13:51:21.008018  919134 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:21.008052  919134 kubeadm.go:309] 
	I0429 13:51:21.008113  919134 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:21.008295  919134 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:21.008387  919134 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:21.008401  919134 kubeadm.go:309] 
	I0429 13:51:21.008508  919134 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:21.008529  919134 kubeadm.go:309] 
	I0429 13:51:21.008599  919134 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:21.008614  919134 kubeadm.go:309] 
	I0429 13:51:21.008713  919134 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:21.008818  919134 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:21.008924  919134 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:21.008935  919134 kubeadm.go:309] 
	I0429 13:51:21.009050  919134 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:21.009156  919134 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:21.009170  919134 kubeadm.go:309] 
	I0429 13:51:21.009272  919134 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j7aq9o.yzlr9atiacx5a508 \
	I0429 13:51:21.009408  919134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:21.009442  919134 kubeadm.go:309] 	--control-plane 
	I0429 13:51:21.009451  919134 kubeadm.go:309] 
	I0429 13:51:21.009553  919134 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:21.009564  919134 kubeadm.go:309] 
	I0429 13:51:21.009666  919134 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j7aq9o.yzlr9atiacx5a508 \
	I0429 13:51:21.009809  919134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:21.011249  919134 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:51:21.011439  919134 cni.go:84] Creating CNI manager for ""
	I0429 13:51:21.011465  919134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:21.013766  919134 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:17.402878  919444 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0429 13:51:17.415086  919444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
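The two commands above implement an idempotent host-record update: check whether /etc/hosts already maps control-plane.minikube.internal to the node IP, and if not, strip any stale entry and append the current one via a temp file. A minimal local Go sketch of the same idea (hypothetical helper; the real step runs the logged bash one-liner over SSH):

// ensureHostRecord mirrors the /etc/hosts update shown above: drop any stale
// line for the hostname, then append the current node IP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale record for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostRecord("/etc/hosts", "192.168.39.231", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}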
	I0429 13:51:17.433461  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:17.581402  919444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:17.605047  919444 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581 for IP: 192.168.39.231
	I0429 13:51:17.605084  919444 certs.go:194] generating shared ca certs ...
	I0429 13:51:17.605111  919444 certs.go:226] acquiring lock for ca certs: {Name:mk6be02e49d8ee7e457b7d9b7cd478c5e883ea66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.605325  919444 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key
	I0429 13:51:17.605380  919444 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key
	I0429 13:51:17.605394  919444 certs.go:256] generating profile certs ...
	I0429 13:51:17.605485  919444 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key
	I0429 13:51:17.605508  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt with IP's: []
	I0429 13:51:17.758345  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt ...
	I0429 13:51:17.758389  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.crt: {Name:mk71b88bc301f4fb2764d7260d29f72b66fbde57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.758610  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key ...
	I0429 13:51:17.758629  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/client.key: {Name:mke19177b6dd30b6b5cfe16b58aebd77cf405023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.758772  919444 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72
	I0429 13:51:17.758799  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0429 13:51:17.870375  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 ...
	I0429 13:51:17.870434  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72: {Name:mk095bbcf32b9206fd45d75d3fd534fd886deaf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.870704  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72 ...
	I0429 13:51:17.870734  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72: {Name:mk0fe99593f3e0fb6fa58e5506657b0a68dedbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:17.870868  919444 certs.go:381] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt.a4dfbf72 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt
	I0429 13:51:17.871004  919444 certs.go:385] copying /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key.a4dfbf72 -> /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key
	I0429 13:51:17.871120  919444 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key
	I0429 13:51:17.871147  919444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt with IP's: []
	I0429 13:51:18.157584  919444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt ...
	I0429 13:51:18.157634  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt: {Name:mk8374492e4263beb7a626a1c3df0375394ea85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:18.157816  919444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key ...
	I0429 13:51:18.157831  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key: {Name:mkada9f5504adca9793df490526a46dec967df9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
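The crypto.go lines above generate the per-profile certificates (client, apiserver, aggregator proxy-client), each signed by the shared minikubeCA; the apiserver cert carries the IP SANs listed in the log. A compact sketch of that flow with crypto/x509, assuming RSA-2048 keys and a three-year validity rather than minikube's exact parameters:

// Minimal sketch of CA-signed leaf issuance with IP SANs. Key size, validity
// and file handling are assumptions, not minikube's exact values.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In minikube the "minikubeCA" key pair already exists on disk; created here for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the same IP SANs the log shows for the apiserver profile cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.231"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}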
	I0429 13:51:18.158011  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem (1338 bytes)
	W0429 13:51:18.158050  919444 certs.go:480] ignoring /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660_empty.pem, impossibly tiny 0 bytes
	I0429 13:51:18.158062  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 13:51:18.158087  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/ca.pem (1078 bytes)
	I0429 13:51:18.158110  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/cert.pem (1123 bytes)
	I0429 13:51:18.158133  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/certs/key.pem (1679 bytes)
	I0429 13:51:18.158172  919444 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem (1708 bytes)
	I0429 13:51:18.158763  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:51:18.191100  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:51:18.225913  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:51:18.257976  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 13:51:18.290013  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 13:51:18.320663  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:51:18.358772  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:51:18.396339  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/embed-certs-954581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 13:51:18.430308  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/ssl/certs/8546602.pem --> /usr/share/ca-certificates/8546602.pem (1708 bytes)
	I0429 13:51:18.462521  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:51:18.506457  919444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-847310/.minikube/certs/854660.pem --> /usr/share/ca-certificates/854660.pem (1338 bytes)
	I0429 13:51:18.554292  919444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:51:18.579419  919444 ssh_runner.go:195] Run: openssl version
	I0429 13:51:18.586461  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8546602.pem && ln -fs /usr/share/ca-certificates/8546602.pem /etc/ssl/certs/8546602.pem"
	I0429 13:51:18.600965  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.606924  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 12:39 /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.607010  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8546602.pem
	I0429 13:51:18.614494  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8546602.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:51:18.631116  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:51:18.646180  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.653216  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:59 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.653419  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:51:18.662380  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:51:18.676780  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/854660.pem && ln -fs /usr/share/ca-certificates/854660.pem /etc/ssl/certs/854660.pem"
	I0429 13:51:18.693706  919444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.699203  919444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 12:39 /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.699282  919444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/854660.pem
	I0429 13:51:18.706196  919444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/854660.pem /etc/ssl/certs/51391683.0"
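The sequence above installs each PEM under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A small sketch of the same convention, shelling out to openssl exactly as the logged commands do (paths taken from the log, error handling trimmed):

// linkCert computes the subject hash of a PEM and creates the /etc/ssl/certs
// <hash>.0 symlink that OpenSSL's lookup expects, mirroring `ln -fs` above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/854660.pem",
		"/usr/share/ca-certificates/8546602.pem",
	} {
		if err := linkCert(p); err != nil {
			fmt.Fprintln(os.Stderr, p, err)
		}
	}
}

The hash-named links are what the later `test -L /etc/ssl/certs/<hash>.0` checks in the log are probing for before recreating them.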
	I0429 13:51:18.722437  919444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:51:18.728022  919444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 13:51:18.728098  919444 kubeadm.go:391] StartCluster: {Name:embed-certs-954581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-954581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:51:18.728213  919444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 13:51:18.728319  919444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 13:51:18.791594  919444 cri.go:89] found id: ""
	I0429 13:51:18.791685  919444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:51:18.808197  919444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:51:18.820976  919444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:51:18.834915  919444 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:51:18.834943  919444 kubeadm.go:156] found existing configuration files:
	
	I0429 13:51:18.834993  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:51:18.847276  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:51:18.847394  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:51:18.861110  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:51:18.872859  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:51:18.872936  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:51:18.886991  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:51:18.902096  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:51:18.902189  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:51:18.916831  919444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:51:18.931350  919444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:51:18.931461  919444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:51:18.946903  919444 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 13:51:19.273238  919444 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:51:19.046953  905474 out.go:204]   - Booting up control plane ...
	I0429 13:51:19.047090  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:19.047227  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:19.047949  905474 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:19.077574  905474 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:19.078808  905474 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:19.078887  905474 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:19.269608  905474 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:19.269727  905474 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:20.272144  905474 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002535006s
	I0429 13:51:20.272275  905474 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:21.015724  919134 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:21.032514  919134 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
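The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log; purely for orientation, a bridge CNI config of this kind typically looks like the JSON embedded in the sketch below (plugin list and pod subnet are assumptions, not minikube's literal file):

// Hypothetical sketch of writing a bridge CNI conflist like the one scp'd above.
package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}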
	I0429 13:51:21.067903  919134 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:21.068003  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:21.068084  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-301942 minikube.k8s.io/updated_at=2024_04_29T13_51_21_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=no-preload-301942 minikube.k8s.io/primary=true
	I0429 13:51:21.274389  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:21.274497  919134 ops.go:34] apiserver oom_adj: -16
	I0429 13:51:21.774536  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:22.275433  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:22.774528  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:23.275181  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:23.775112  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:24.274762  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.274454  905474 kubeadm.go:309] [api-check] The API server is healthy after 6.002196479s
	I0429 13:51:26.296506  905474 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:26.319861  905474 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:26.366225  905474 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:26.366494  905474 kubeadm.go:309] [mark-control-plane] Marking the node pause-553639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:26.382944  905474 kubeadm.go:309] [bootstrap-token] Using token: c3k1q5.es7clq6k67f9ra0a
	I0429 13:51:26.384906  905474 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:26.385060  905474 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:26.392403  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:26.403806  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:26.409563  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:26.416188  905474 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:26.425965  905474 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:26.688482  905474 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:27.197073  905474 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:27.688474  905474 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:27.689675  905474 kubeadm.go:309] 
	I0429 13:51:27.689785  905474 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:27.689798  905474 kubeadm.go:309] 
	I0429 13:51:27.689893  905474 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:27.689905  905474 kubeadm.go:309] 
	I0429 13:51:27.689936  905474 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:27.690015  905474 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:27.690137  905474 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:27.690176  905474 kubeadm.go:309] 
	I0429 13:51:27.690282  905474 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:27.690457  905474 kubeadm.go:309] 
	I0429 13:51:27.690633  905474 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:27.690666  905474 kubeadm.go:309] 
	I0429 13:51:27.690738  905474 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:27.690856  905474 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:27.690945  905474 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:27.690961  905474 kubeadm.go:309] 
	I0429 13:51:27.691082  905474 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:27.691207  905474 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:27.691232  905474 kubeadm.go:309] 
	I0429 13:51:27.691345  905474 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token c3k1q5.es7clq6k67f9ra0a \
	I0429 13:51:27.691538  905474 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:27.691587  905474 kubeadm.go:309] 	--control-plane 
	I0429 13:51:27.691614  905474 kubeadm.go:309] 
	I0429 13:51:27.691803  905474 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:27.691824  905474 kubeadm.go:309] 
	I0429 13:51:27.691938  905474 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token c3k1q5.es7clq6k67f9ra0a \
	I0429 13:51:27.692083  905474 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:27.692479  905474 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 13:51:27.692512  905474 cni.go:84] Creating CNI manager for ""
	I0429 13:51:27.692542  905474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:27.694910  905474 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:27.696556  905474 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:27.715264  905474 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 13:51:27.743581  905474 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:27.743699  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.743712  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-553639 minikube.k8s.io/updated_at=2024_04_29T13_51_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=pause-553639 minikube.k8s.io/primary=true
	I0429 13:51:27.774964  905474 ops.go:34] apiserver oom_adj: -16
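The minikube-rbac step above grants cluster-admin to the kube-system:default service account. Expressed with client-go instead of kubectl, the same binding looks roughly like this (kubeconfig path reused from the log; a sketch, not minikube's implementation):

// Create the equivalent of:
//   kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
//     --serviceaccount=kube-system:default
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The repeated `kubectl get sa default` calls that follow in the log are simply polling until the default service account exists, so the binding's subject is guaranteed to be present.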
	I0429 13:51:27.947562  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:24.775450  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:25.275416  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:25.775133  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.274519  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:26.774594  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.275288  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:27.774962  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.274876  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.774516  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.275303  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.501525  919444 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 13:51:31.501636  919444 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 13:51:31.501753  919444 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 13:51:31.501898  919444 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 13:51:31.502025  919444 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 13:51:31.502127  919444 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:51:31.504128  919444 out.go:204]   - Generating certificates and keys ...
	I0429 13:51:31.504246  919444 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 13:51:31.504334  919444 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 13:51:31.504441  919444 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 13:51:31.504524  919444 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 13:51:31.504607  919444 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 13:51:31.504684  919444 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 13:51:31.504759  919444 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 13:51:31.504948  919444 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-954581 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0429 13:51:31.505032  919444 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 13:51:31.505217  919444 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-954581 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0429 13:51:31.505318  919444 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 13:51:31.505379  919444 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 13:51:31.505443  919444 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 13:51:31.505551  919444 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:51:31.505640  919444 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:51:31.505717  919444 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:51:31.505795  919444 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:51:31.505885  919444 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:51:31.505963  919444 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:51:31.506077  919444 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:51:31.506195  919444 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:51:31.507943  919444 out.go:204]   - Booting up control plane ...
	I0429 13:51:31.508080  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:51:31.508189  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:51:31.508267  919444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:51:31.508376  919444 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:51:31.508521  919444 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:51:31.508601  919444 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 13:51:31.508780  919444 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 13:51:31.508889  919444 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 13:51:31.508982  919444 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002067963s
	I0429 13:51:31.509109  919444 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 13:51:31.509210  919444 kubeadm.go:309] [api-check] The API server is healthy after 6.003008264s
	I0429 13:51:31.509373  919444 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 13:51:31.509549  919444 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 13:51:31.509618  919444 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 13:51:31.509822  919444 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-954581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 13:51:31.509917  919444 kubeadm.go:309] [bootstrap-token] Using token: lxayf9.a1fyv4yzj0t2zn7h
	I0429 13:51:31.511632  919444 out.go:204]   - Configuring RBAC rules ...
	I0429 13:51:31.511761  919444 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 13:51:31.511885  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 13:51:31.512092  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 13:51:31.512289  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 13:51:31.512428  919444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 13:51:31.512558  919444 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 13:51:31.512749  919444 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 13:51:31.512828  919444 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 13:51:31.512906  919444 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 13:51:31.512917  919444 kubeadm.go:309] 
	I0429 13:51:31.513004  919444 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 13:51:31.513014  919444 kubeadm.go:309] 
	I0429 13:51:31.513137  919444 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 13:51:31.513147  919444 kubeadm.go:309] 
	I0429 13:51:31.513177  919444 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 13:51:31.513262  919444 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 13:51:31.513307  919444 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 13:51:31.513317  919444 kubeadm.go:309] 
	I0429 13:51:31.513366  919444 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 13:51:31.513374  919444 kubeadm.go:309] 
	I0429 13:51:31.513440  919444 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 13:51:31.513449  919444 kubeadm.go:309] 
	I0429 13:51:31.513512  919444 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 13:51:31.513609  919444 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 13:51:31.513696  919444 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 13:51:31.513710  919444 kubeadm.go:309] 
	I0429 13:51:31.513816  919444 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 13:51:31.513948  919444 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 13:51:31.513963  919444 kubeadm.go:309] 
	I0429 13:51:31.514095  919444 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token lxayf9.a1fyv4yzj0t2zn7h \
	I0429 13:51:31.514189  919444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 \
	I0429 13:51:31.514210  919444 kubeadm.go:309] 	--control-plane 
	I0429 13:51:31.514217  919444 kubeadm.go:309] 
	I0429 13:51:31.514300  919444 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 13:51:31.514310  919444 kubeadm.go:309] 
	I0429 13:51:31.514399  919444 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token lxayf9.a1fyv4yzj0t2zn7h \
	I0429 13:51:31.514557  919444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e86e01321b0ee2400049a0f2863e21b68a0f729e3f866035e1506a51e5de529 
	I0429 13:51:31.514571  919444 cni.go:84] Creating CNI manager for ""
	I0429 13:51:31.514579  919444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 13:51:31.516453  919444 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 13:51:31.518126  919444 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 13:51:31.532420  919444 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 13:51:31.558750  919444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:51:31.558835  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.558837  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-954581 minikube.k8s.io/updated_at=2024_04_29T13_51_31_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=embed-certs-954581 minikube.k8s.io/primary=true
	I0429 13:51:31.609722  919444 ops.go:34] apiserver oom_adj: -16
	I0429 13:51:31.809368  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.310347  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.448054  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:28.947685  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.448628  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.947715  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.447986  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.947683  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.448709  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.947778  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.448685  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.948669  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:29.774752  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.275256  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:30.775382  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.274923  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:31.774577  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.274472  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:32.775106  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.275187  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.775410  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.274789  919134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.423003  919134 kubeadm.go:1107] duration metric: took 13.355093749s to wait for elevateKubeSystemPrivileges
	W0429 13:51:34.423061  919134 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:34.423078  919134 kubeadm.go:393] duration metric: took 26.306862708s to StartCluster
	I0429 13:51:34.423104  919134 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:34.423212  919134 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:34.424535  919134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:34.424824  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 13:51:34.424833  919134 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.248 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:34.426632  919134 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:34.424915  919134 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:34.425071  919134 config.go:182] Loaded profile config "no-preload-301942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:34.428518  919134 addons.go:69] Setting storage-provisioner=true in profile "no-preload-301942"
	I0429 13:51:34.428576  919134 addons.go:234] Setting addon storage-provisioner=true in "no-preload-301942"
	I0429 13:51:34.428526  919134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:34.428628  919134 host.go:66] Checking if "no-preload-301942" exists ...
	I0429 13:51:34.428531  919134 addons.go:69] Setting default-storageclass=true in profile "no-preload-301942"
	I0429 13:51:34.428737  919134 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-301942"
	I0429 13:51:34.429077  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.429108  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.429149  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.429187  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.446327  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0429 13:51:34.446889  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.447594  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.447618  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.447974  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.448234  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.448341  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0429 13:51:34.448951  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.449597  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.449621  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.450118  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.450662  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.450688  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.452793  919134 addons.go:234] Setting addon default-storageclass=true in "no-preload-301942"
	I0429 13:51:34.452849  919134 host.go:66] Checking if "no-preload-301942" exists ...
	I0429 13:51:34.453255  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.453296  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.469135  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 13:51:34.469513  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0429 13:51:34.469695  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.469929  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.470257  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.470279  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.470666  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.470810  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.470836  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.470869  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.471242  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.471884  919134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:34.471939  919134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:34.473170  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:51:34.475579  919134 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:51:34.477063  919134 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:34.477085  919134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 13:51:34.477110  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:51:34.480514  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.480982  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:51:34.481008  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.481193  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:51:34.481437  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:51:34.481615  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:51:34.481768  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:51:34.490950  919134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0429 13:51:34.491462  919134 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:34.492150  919134 main.go:141] libmachine: Using API Version  1
	I0429 13:51:34.492170  919134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:34.492515  919134 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:34.492689  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetState
	I0429 13:51:34.494506  919134 main.go:141] libmachine: (no-preload-301942) Calling .DriverName
	I0429 13:51:34.494775  919134 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:34.494797  919134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 13:51:34.494813  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHHostname
	I0429 13:51:34.498506  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.499044  919134 main.go:141] libmachine: (no-preload-301942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:7e:ee", ip: ""} in network mk-no-preload-301942: {Iface:virbr2 ExpiryTime:2024-04-29 14:50:30 +0000 UTC Type:0 Mac:52:54:00:30:7e:ee Iaid: IPaddr:192.168.72.248 Prefix:24 Hostname:no-preload-301942 Clientid:01:52:54:00:30:7e:ee}
	I0429 13:51:34.499079  919134 main.go:141] libmachine: (no-preload-301942) DBG | domain no-preload-301942 has defined IP address 192.168.72.248 and MAC address 52:54:00:30:7e:ee in network mk-no-preload-301942
	I0429 13:51:34.499516  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHPort
	I0429 13:51:34.499719  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHKeyPath
	I0429 13:51:34.499873  919134 main.go:141] libmachine: (no-preload-301942) Calling .GetSSHUsername
	I0429 13:51:34.500018  919134 sshutil.go:53] new ssh client: &{IP:192.168.72.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/no-preload-301942/id_rsa Username:docker}
	I0429 13:51:34.788612  919134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 13:51:34.788641  919134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:35.097001  919134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:35.190365  919134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:35.336195  919134 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0429 13:51:35.337235  919134 node_ready.go:35] waiting up to 6m0s for node "no-preload-301942" to be "Ready" ...
	I0429 13:51:35.353623  919134 node_ready.go:49] node "no-preload-301942" has status "Ready":"True"
	I0429 13:51:35.353653  919134 node_ready.go:38] duration metric: took 16.389597ms for node "no-preload-301942" to be "Ready" ...
	I0429 13:51:35.353663  919134 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:35.378776  919134 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:35.853014  919134 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-301942" context rescaled to 1 replicas
	I0429 13:51:36.367350  919134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.270294891s)
	I0429 13:51:36.367394  919134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17695906s)
	I0429 13:51:36.367482  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.367497  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.367512  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.367533  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.367940  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368004  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.367936  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368036  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.368051  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.368063  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.368008  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.367982  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.368023  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.368150  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.368372  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368389  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.368462  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.368531  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.368567  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.437325  919134 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:36.437357  919134 main.go:141] libmachine: (no-preload-301942) Calling .Close
	I0429 13:51:36.437753  919134 main.go:141] libmachine: (no-preload-301942) DBG | Closing plugin on server side
	I0429 13:51:36.437804  919134 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:36.437825  919134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:36.439340  919134 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 13:51:32.809617  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.310412  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.809800  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.310197  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.809508  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.309899  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.810305  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.309614  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.809530  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.309911  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.448668  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:33.947980  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.448640  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:34.948660  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.448573  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:35.948403  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.448682  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.948302  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.448641  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:37.948648  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:36.441067  919134 addons.go:505] duration metric: took 2.016146811s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 13:51:37.387507  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:38.448662  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.947690  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.448636  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.947580  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.448398  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.948349  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.448191  905474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.547321  905474 kubeadm.go:1107] duration metric: took 13.803696575s to wait for elevateKubeSystemPrivileges
	W0429 13:51:41.547394  905474 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:41.547408  905474 kubeadm.go:393] duration metric: took 5m10.871748964s to StartCluster
	I0429 13:51:41.547435  905474 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:41.547564  905474 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:41.548842  905474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:41.549141  905474 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:41.551193  905474 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:41.549260  905474 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:41.549431  905474 config.go:182] Loaded profile config "pause-553639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:41.552835  905474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:41.554194  905474 out.go:177] * Enabled addons: 
	I0429 13:51:37.810170  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.310105  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:38.809531  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.310243  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:39.810449  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.310068  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:40.809489  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.310362  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.810342  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:42.310093  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:41.555487  905474 addons.go:505] duration metric: took 6.225649ms for enable addons: enabled=[]
	I0429 13:51:41.722347  905474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:41.743056  905474 node_ready.go:35] waiting up to 6m0s for node "pause-553639" to be "Ready" ...
	I0429 13:51:41.753188  905474 node_ready.go:49] node "pause-553639" has status "Ready":"True"
	I0429 13:51:41.753219  905474 node_ready.go:38] duration metric: took 10.125261ms for node "pause-553639" to be "Ready" ...
	I0429 13:51:41.753232  905474 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:41.760239  905474 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:39.392479  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:41.886396  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:43.888050  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:42.810447  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:43.310397  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:43.809871  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:44.309906  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:44.810118  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:45.310204  919444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 13:51:45.434296  919444 kubeadm.go:1107] duration metric: took 13.875520942s to wait for elevateKubeSystemPrivileges
	W0429 13:51:45.434360  919444 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 13:51:45.434375  919444 kubeadm.go:393] duration metric: took 26.706283529s to StartCluster
	I0429 13:51:45.434402  919444 settings.go:142] acquiring lock: {Name:mkfc2a12c970f9efb6ef840042bb7ab028a1a307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:45.434523  919444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 13:51:45.436948  919444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-847310/kubeconfig: {Name:mkadb918f2b0432255c1cf69aa2465afc0e639c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:51:45.437337  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 13:51:45.437359  919444 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 13:51:45.441140  919444 out.go:177] * Verifying Kubernetes components...
	I0429 13:51:45.437430  919444 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:51:45.441228  919444 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-954581"
	I0429 13:51:45.437607  919444 config.go:182] Loaded profile config "embed-certs-954581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:51:45.441292  919444 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-954581"
	I0429 13:51:45.443096  919444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:51:45.441291  919444 addons.go:69] Setting default-storageclass=true in profile "embed-certs-954581"
	I0429 13:51:45.443218  919444 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-954581"
	I0429 13:51:45.441344  919444 host.go:66] Checking if "embed-certs-954581" exists ...
	I0429 13:51:45.443715  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.443737  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.443764  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.443770  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.461955  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0429 13:51:45.462026  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0429 13:51:45.462546  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.462599  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.463314  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.463336  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.463472  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.463499  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.463744  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.463861  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.463941  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.464511  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.464574  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.467569  919444 addons.go:234] Setting addon default-storageclass=true in "embed-certs-954581"
	I0429 13:51:45.467620  919444 host.go:66] Checking if "embed-certs-954581" exists ...
	I0429 13:51:45.467920  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.467975  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.482575  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0429 13:51:45.483099  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.483895  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.483920  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.484329  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.484566  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.487017  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:45.489722  919444 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:51:45.488682  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0429 13:51:43.769055  905474 pod_ready.go:102] pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:44.769600  905474 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.769632  905474 pod_ready.go:81] duration metric: took 3.009350313s for pod "coredns-7db6d8ff4d-qbcb2" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.769651  905474 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.776879  905474 pod_ready.go:92] pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.776908  905474 pod_ready.go:81] duration metric: took 7.248056ms for pod "coredns-7db6d8ff4d-xfhkh" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.776922  905474 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.783573  905474 pod_ready.go:92] pod "etcd-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.783608  905474 pod_ready.go:81] duration metric: took 6.675529ms for pod "etcd-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.783624  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.790571  905474 pod_ready.go:92] pod "kube-apiserver-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.790600  905474 pod_ready.go:81] duration metric: took 6.968174ms for pod "kube-apiserver-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.790611  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.796134  905474 pod_ready.go:92] pod "kube-controller-manager-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:44.796166  905474 pod_ready.go:81] duration metric: took 5.547308ms for pod "kube-controller-manager-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:44.796178  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lchdx" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.165718  905474 pod_ready.go:92] pod "kube-proxy-lchdx" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:45.165775  905474 pod_ready.go:81] duration metric: took 369.588708ms for pod "kube-proxy-lchdx" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.165809  905474 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.565232  905474 pod_ready.go:92] pod "kube-scheduler-pause-553639" in "kube-system" namespace has status "Ready":"True"
	I0429 13:51:45.565268  905474 pod_ready.go:81] duration metric: took 399.442597ms for pod "kube-scheduler-pause-553639" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:45.565280  905474 pod_ready.go:38] duration metric: took 3.812034288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:45.565301  905474 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:51:45.565375  905474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:51:45.583769  905474 api_server.go:72] duration metric: took 4.034582991s to wait for apiserver process to appear ...
	I0429 13:51:45.583811  905474 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:51:45.583842  905474 api_server.go:253] Checking apiserver healthz at https://192.168.61.170:8443/healthz ...
	I0429 13:51:45.591464  905474 api_server.go:279] https://192.168.61.170:8443/healthz returned 200:
	ok
	I0429 13:51:45.594229  905474 api_server.go:141] control plane version: v1.30.0
	I0429 13:51:45.594266  905474 api_server.go:131] duration metric: took 10.445929ms to wait for apiserver health ...
	I0429 13:51:45.594278  905474 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:51:45.769050  905474 system_pods.go:59] 7 kube-system pods found
	I0429 13:51:45.769102  905474 system_pods.go:61] "coredns-7db6d8ff4d-qbcb2" [ae828405-af7f-4d81-89db-04f5a8b615b8] Running
	I0429 13:51:45.769109  905474 system_pods.go:61] "coredns-7db6d8ff4d-xfhkh" [0b51d117-6754-4d5a-8191-6376818cd778] Running
	I0429 13:51:45.769114  905474 system_pods.go:61] "etcd-pause-553639" [f60f7ca2-3a92-4c8c-86c2-cc639343a932] Running
	I0429 13:51:45.769121  905474 system_pods.go:61] "kube-apiserver-pause-553639" [9e8996af-54c4-4db4-a620-40962d99808a] Running
	I0429 13:51:45.769127  905474 system_pods.go:61] "kube-controller-manager-pause-553639" [ba478e54-3f9c-425b-83cd-1ca2bddfe039] Running
	I0429 13:51:45.769136  905474 system_pods.go:61] "kube-proxy-lchdx" [b56b310e-2281-4ff2-a3c1-9c6d3e340464] Running
	I0429 13:51:45.769141  905474 system_pods.go:61] "kube-scheduler-pause-553639" [828d0a42-fc45-450d-8745-8750c45fac94] Running
	I0429 13:51:45.769150  905474 system_pods.go:74] duration metric: took 174.863248ms to wait for pod list to return data ...
	I0429 13:51:45.769161  905474 default_sa.go:34] waiting for default service account to be created ...
	I0429 13:51:45.965156  905474 default_sa.go:45] found service account: "default"
	I0429 13:51:45.965210  905474 default_sa.go:55] duration metric: took 196.039407ms for default service account to be created ...
	I0429 13:51:45.965226  905474 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 13:51:46.168792  905474 system_pods.go:86] 7 kube-system pods found
	I0429 13:51:46.168831  905474 system_pods.go:89] "coredns-7db6d8ff4d-qbcb2" [ae828405-af7f-4d81-89db-04f5a8b615b8] Running
	I0429 13:51:46.168839  905474 system_pods.go:89] "coredns-7db6d8ff4d-xfhkh" [0b51d117-6754-4d5a-8191-6376818cd778] Running
	I0429 13:51:46.168845  905474 system_pods.go:89] "etcd-pause-553639" [f60f7ca2-3a92-4c8c-86c2-cc639343a932] Running
	I0429 13:51:46.168851  905474 system_pods.go:89] "kube-apiserver-pause-553639" [9e8996af-54c4-4db4-a620-40962d99808a] Running
	I0429 13:51:46.168856  905474 system_pods.go:89] "kube-controller-manager-pause-553639" [ba478e54-3f9c-425b-83cd-1ca2bddfe039] Running
	I0429 13:51:46.168861  905474 system_pods.go:89] "kube-proxy-lchdx" [b56b310e-2281-4ff2-a3c1-9c6d3e340464] Running
	I0429 13:51:46.168881  905474 system_pods.go:89] "kube-scheduler-pause-553639" [828d0a42-fc45-450d-8745-8750c45fac94] Running
	I0429 13:51:46.168891  905474 system_pods.go:126] duration metric: took 203.656456ms to wait for k8s-apps to be running ...
	I0429 13:51:46.168906  905474 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 13:51:46.168962  905474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:51:46.185863  905474 system_svc.go:56] duration metric: took 16.943118ms WaitForService to wait for kubelet
	I0429 13:51:46.185907  905474 kubeadm.go:576] duration metric: took 4.636729164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:51:46.185935  905474 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:51:46.366600  905474 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:51:46.366629  905474 node_conditions.go:123] node cpu capacity is 2
	I0429 13:51:46.366641  905474 node_conditions.go:105] duration metric: took 180.700199ms to run NodePressure ...
	I0429 13:51:46.366653  905474 start.go:240] waiting for startup goroutines ...
	I0429 13:51:46.366660  905474 start.go:245] waiting for cluster config update ...
	I0429 13:51:46.366667  905474 start.go:254] writing updated cluster config ...
	I0429 13:51:46.367051  905474 ssh_runner.go:195] Run: rm -f paused
	I0429 13:51:46.438349  905474 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 13:51:46.440724  905474 out.go:177] * Done! kubectl is now configured to use "pause-553639" cluster and "default" namespace by default
	I0429 13:51:45.490382  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.491519  919444 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:45.491639  919444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 13:51:45.491775  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:45.492243  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.492268  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.492664  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.493587  919444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:51:45.493624  919444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:51:45.496063  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.496514  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:45.496538  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.496850  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:45.497963  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:45.498174  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:45.498342  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:45.510997  919444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0429 13:51:45.511511  919444 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:51:45.512030  919444 main.go:141] libmachine: Using API Version  1
	I0429 13:51:45.512047  919444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:51:45.512480  919444 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:51:45.512679  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetState
	I0429 13:51:45.514510  919444 main.go:141] libmachine: (embed-certs-954581) Calling .DriverName
	I0429 13:51:45.514841  919444 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:45.514856  919444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 13:51:45.514873  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHHostname
	I0429 13:51:45.518364  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.518878  919444 main.go:141] libmachine: (embed-certs-954581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:58:c7", ip: ""} in network mk-embed-certs-954581: {Iface:virbr1 ExpiryTime:2024-04-29 14:51:00 +0000 UTC Type:0 Mac:52:54:00:dc:58:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:embed-certs-954581 Clientid:01:52:54:00:dc:58:c7}
	I0429 13:51:45.518912  919444 main.go:141] libmachine: (embed-certs-954581) DBG | domain embed-certs-954581 has defined IP address 192.168.39.231 and MAC address 52:54:00:dc:58:c7 in network mk-embed-certs-954581
	I0429 13:51:45.519289  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHPort
	I0429 13:51:45.519542  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHKeyPath
	I0429 13:51:45.519740  919444 main.go:141] libmachine: (embed-certs-954581) Calling .GetSSHUsername
	I0429 13:51:45.519895  919444 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/embed-certs-954581/id_rsa Username:docker}
	I0429 13:51:45.736886  919444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:51:45.736940  919444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 13:51:45.814101  919444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 13:51:45.990224  919444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 13:51:46.406270  919444 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0429 13:51:46.406460  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.406488  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.407021  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.407043  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.407055  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.407064  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.407070  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.407401  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.407425  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.407450  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.407890  919444 node_ready.go:35] waiting up to 6m0s for node "embed-certs-954581" to be "Ready" ...
	I0429 13:51:46.436792  919444 node_ready.go:49] node "embed-certs-954581" has status "Ready":"True"
	I0429 13:51:46.436826  919444 node_ready.go:38] duration metric: took 28.899422ms for node "embed-certs-954581" to be "Ready" ...
	I0429 13:51:46.436838  919444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:51:46.436970  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.436995  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.437320  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.437339  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.437343  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.467842  919444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4vstk" in "kube-system" namespace to be "Ready" ...
	I0429 13:51:46.798195  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.798232  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.798714  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.798767  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.798777  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.798786  919444 main.go:141] libmachine: Making call to close driver server
	I0429 13:51:46.798795  919444 main.go:141] libmachine: (embed-certs-954581) Calling .Close
	I0429 13:51:46.799134  919444 main.go:141] libmachine: (embed-certs-954581) DBG | Closing plugin on server side
	I0429 13:51:46.799196  919444 main.go:141] libmachine: Successfully made call to close driver server
	I0429 13:51:46.799203  919444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 13:51:46.801544  919444 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0429 13:51:46.803054  919444 addons.go:505] duration metric: took 1.36562193s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0429 13:51:46.911549  919444 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-954581" context rescaled to 1 replicas
	I0429 13:51:45.888928  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	I0429 13:51:48.387703  919134 pod_ready.go:102] pod "coredns-7db6d8ff4d-cr6tc" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.675172310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398709675133555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bae2b636-7f89-4818-a212-d2ccfaf3490d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.675791816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efbdebc3-b495-4612-a255-6328b5e076c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.675925777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efbdebc3-b495-4612-a255-6328b5e076c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.676282790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efbdebc3-b495-4612-a255-6328b5e076c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.720922838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce802709-ba61-438e-b21e-971fc80079a4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.721087179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce802709-ba61-438e-b21e-971fc80079a4 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.722831177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f4da3b7-7ea5-4002-b16b-f2ee9051b2ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.723464934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398709723418467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f4da3b7-7ea5-4002-b16b-f2ee9051b2ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.724340299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc14e628-c73f-4ae0-ba2b-86d039b59285 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.724409645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc14e628-c73f-4ae0-ba2b-86d039b59285 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.724590099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc14e628-c73f-4ae0-ba2b-86d039b59285 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.774572925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82863066-c866-4422-b924-059a9826d94f name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.774656118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82863066-c866-4422-b924-059a9826d94f name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.776263171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80d29cb2-9c5f-4708-a845-fd783d02b011 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.776694283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398709776663242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80d29cb2-9c5f-4708-a845-fd783d02b011 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.777533630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=871d49e6-564d-44f3-b282-4e088e5a21c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.777646719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=871d49e6-564d-44f3-b282-4e088e5a21c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.777829734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=871d49e6-564d-44f3-b282-4e088e5a21c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.825816616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=273cd1e6-c5ee-4f46-bbfb-b4e9f8357ea0 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.825948630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=273cd1e6-c5ee-4f46-bbfb-b4e9f8357ea0 name=/runtime.v1.RuntimeService/Version
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.827495470Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec328657-6fba-4d91-aff7-eb10d7e88250 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.828604500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714398709828567503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec328657-6fba-4d91-aff7-eb10d7e88250 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.829513852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f197781c-b397-4a7c-846e-040ff3d3055d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.829581410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f197781c-b397-4a7c-846e-040ff3d3055d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 13:51:49 pause-553639 crio[3058]: time="2024-04-29 13:51:49.829758410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8,PodSandboxId:85323976f623d670833c8bf35b8f31561b6b0a25d6721c30607af8f7cf1551d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703658806635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xfhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51d117-6754-4d5a-8191-6376818cd778,},Annotations:map[string]string{io.kubernetes.container.hash: 7be5df04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1,PodSandboxId:09bf5ff189180d456b05e09433f33ff5036554f178b5af85f9ebfaff33df99a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714398703570187239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbcb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: ae828405-af7f-4d81-89db-04f5a8b615b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9faf4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c,PodSandboxId:bc3189a6510d45cc69b2af9744a5a96323abc07a96012a17b75e44cbb4b5dd1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,Cr
eatedAt:1714398703466770334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lchdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b310e-2281-4ff2-a3c1-9c6d3e340464,},Annotations:map[string]string{io.kubernetes.container.hash: b4d54e44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e,PodSandboxId:0928a1cb46ea1cdb747263b6f0b15651515be06ed31d0b43655c11afb5071de6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714398680855490996,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48d8e5966e65a3d1b22a71fd095c167,},Annotations:map[string]string{io.kubernetes.container.hash: 63d8d88d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493,PodSandboxId:d20c8ff3d81d1fd9d6a06e2a2fe29f1d9f788fd6fd8abaaa529c4b87cd0a1c9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714398680901510675,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7da9222096cbed32c57a3197ba46b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 63b7600,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8,PodSandboxId:f385f6f19141b5cf7943c86a4c1eb276c64fa3ef14b08f73c17a93e4b9036baf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714398680825509736,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4af49cb9accc0ba7b96317e152451c,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a,PodSandboxId:035a0cfb63e657a576c4234abb94e34e8988e7221206e0585966a3d83baa3410,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714398680829833285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manage
r,io.kubernetes.pod.name: kube-controller-manager-pause-553639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e436b0e985e49f253d42383b7bd9b1d0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f197781c-b397-4a7c-846e-040ff3d3055d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6ffe22d03ad3a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago       Running             coredns                   0                   85323976f623d       coredns-7db6d8ff4d-xfhkh
	71ad796050725       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago       Running             coredns                   0                   09bf5ff189180       coredns-7db6d8ff4d-qbcb2
	fdbe40c1373bf       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   6 seconds ago       Running             kube-proxy                0                   bc3189a6510d4       kube-proxy-lchdx
	483c0763a8a4c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Running             etcd                      4                   d20c8ff3d81d1       etcd-pause-553639
	3699f70dd6052       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   29 seconds ago      Running             kube-apiserver            4                   0928a1cb46ea1       kube-apiserver-pause-553639
	5bf11cf3703a3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   29 seconds ago      Running             kube-controller-manager   4                   035a0cfb63e65       kube-controller-manager-pause-553639
	a625f9cc9cb85       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   29 seconds ago      Running             kube-scheduler            4                   f385f6f19141b       kube-scheduler-pause-553639
	
	
	==> coredns [6ffe22d03ad3a4ecf80dbaf013cdc1cf1b95788d870520ad27ff1eaa12a8d8b8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [71ad7960507251718ac337eeb3b02b3c9072e6d3bcb72f0682f9fd68e821e4e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-553639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-553639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=pause-553639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T13_51_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-553639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:51:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:51:47 +0000   Mon, 29 Apr 2024 13:51:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.170
	  Hostname:    pause-553639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f6180b70f984fc4bc96ef6adfcc4408
	  System UUID:                8f6180b7-0f98-4fc4-bc96-ef6adfcc4408
	  Boot ID:                    2fcfde3b-ce03-4523-81c1-289c7d65deb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-qbcb2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9s
	  kube-system                 coredns-7db6d8ff4d-xfhkh                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9s
	  kube-system                 etcd-pause-553639                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         23s
	  kube-system                 kube-apiserver-pause-553639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-pause-553639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-lchdx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-scheduler-pause-553639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node pause-553639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node pause-553639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)  kubelet          Node pause-553639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s                kubelet          Node pause-553639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s                kubelet          Node pause-553639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s                kubelet          Node pause-553639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-553639 event: Registered Node pause-553639 in Controller
	
	
	==> dmesg <==
	[Apr29 13:44] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.071297] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.281171] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.843466] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.233629] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.082079] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.533717] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.013143] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +8.000756] kauditd_printk_skb: 90 callbacks suppressed
	[ +20.447397] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.205994] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.324017] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.228518] systemd-fstab-generator[2885]: Ignoring "noauto" option for root device
	[  +0.424669] systemd-fstab-generator[2914]: Ignoring "noauto" option for root device
	[Apr29 13:46] systemd-fstab-generator[3170]: Ignoring "noauto" option for root device
	[  +0.106938] kauditd_printk_skb: 174 callbacks suppressed
	[  +5.924555] kauditd_printk_skb: 86 callbacks suppressed
	[  +2.786839] systemd-fstab-generator[3896]: Ignoring "noauto" option for root device
	[Apr29 13:50] kauditd_printk_skb: 45 callbacks suppressed
	[Apr29 13:51] systemd-fstab-generator[5629]: Ignoring "noauto" option for root device
	[  +1.585094] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.033710] systemd-fstab-generator[5962]: Ignoring "noauto" option for root device
	[  +0.097060] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.744731] systemd-fstab-generator[6170]: Ignoring "noauto" option for root device
	[  +0.098887] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [483c0763a8a4c42bf3e6906715a2565f8a0252a1edaea140aaecb7c68122d493] <==
	{"level":"info","ts":"2024-04-29T13:51:21.261937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 switched to configuration voters=(4446367452146456582)"}
	{"level":"info","ts":"2024-04-29T13:51:21.270175Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"43b9305115fb250d","local-member-id":"3db4aba3ce0c5806","added-peer-id":"3db4aba3ce0c5806","added-peer-peer-urls":["https://192.168.61.170:2380"]}
	{"level":"info","ts":"2024-04-29T13:51:21.326433Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:51:21.326741Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3db4aba3ce0c5806","initial-advertise-peer-urls":["https://192.168.61.170:2380"],"listen-peer-urls":["https://192.168.61.170:2380"],"advertise-client-urls":["https://192.168.61.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:51:21.32679Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:51:21.327062Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.170:2380"}
	{"level":"info","ts":"2024-04-29T13:51:21.327098Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.170:2380"}
	{"level":"info","ts":"2024-04-29T13:51:21.509056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 received MsgPreVoteResp from 3db4aba3ce0c5806 at term 1"}
	{"level":"info","ts":"2024-04-29T13:51:21.509176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.509182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 received MsgVoteResp from 3db4aba3ce0c5806 at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.50919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3db4aba3ce0c5806 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.509197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3db4aba3ce0c5806 elected leader 3db4aba3ce0c5806 at term 2"}
	{"level":"info","ts":"2024-04-29T13:51:21.515141Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.517475Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3db4aba3ce0c5806","local-member-attributes":"{Name:pause-553639 ClientURLs:[https://192.168.61.170:2379]}","request-path":"/0/members/3db4aba3ce0c5806/attributes","cluster-id":"43b9305115fb250d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:51:21.517773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:51:21.522271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:51:21.533056Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:51:21.533182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:51:21.543752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T13:51:21.552491Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.170:2379"}
	{"level":"info","ts":"2024-04-29T13:51:21.544244Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"43b9305115fb250d","local-member-id":"3db4aba3ce0c5806","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.561281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:51:21.561428Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:51:50 up 8 min,  0 users,  load average: 0.92, 0.57, 0.31
	Linux pause-553639 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3699f70dd6052f1ce7e2655fc6844acd3afd795dc04da39b408b1cbb6c45d38e] <==
	I0429 13:51:23.999377       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:51:23.999558       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 13:51:24.000035       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 13:51:24.037353       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0429 13:51:24.059892       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 13:51:24.064958       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:51:24.065043       1 policy_source.go:224] refreshing policies
	E0429 13:51:24.092899       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0429 13:51:24.106634       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 13:51:24.298823       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:51:24.908564       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 13:51:24.921329       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 13:51:24.921429       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 13:51:25.901598       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:51:25.979912       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 13:51:26.132027       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 13:51:26.142198       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.170]
	I0429 13:51:26.148791       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 13:51:26.157608       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 13:51:26.957125       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:51:27.118752       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:51:27.173028       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 13:51:27.208396       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:51:40.906632       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 13:51:41.152248       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5bf11cf3703a3e0efca7473bef0134f9dc4f6ba6b665e97bfc71e2b2ad8cd48a] <==
	I0429 13:51:40.242621       1 shared_informer.go:320] Caches are synced for PV protection
	I0429 13:51:40.246427       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 13:51:40.247267       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0429 13:51:40.249329       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 13:51:40.249412       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0429 13:51:40.249439       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0429 13:51:40.249474       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 13:51:40.297660       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 13:51:40.306318       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:51:40.312232       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:51:40.347449       1 shared_informer.go:320] Caches are synced for disruption
	I0429 13:51:40.370483       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 13:51:40.404273       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 13:51:40.832713       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:51:40.833208       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 13:51:40.877292       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:51:41.358129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="441.597162ms"
	I0429 13:51:41.376956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.63227ms"
	I0429 13:51:41.403704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.601746ms"
	I0429 13:51:41.403926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.083µs"
	I0429 13:51:44.332268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="381.254µs"
	I0429 13:51:44.436571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.518258ms"
	I0429 13:51:44.436863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130.409µs"
	I0429 13:51:44.468473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.68009ms"
	I0429 13:51:44.469062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="171.823µs"
	
	
	==> kube-proxy [fdbe40c1373bf43c2fe4f6090d1f79f194d54670a592c9067913023c542e690c] <==
	I0429 13:51:43.914784       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:51:43.929056       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.170"]
	I0429 13:51:43.985502       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:51:43.985567       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:51:43.985587       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:51:43.990945       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:51:43.991847       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:51:43.991887       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:51:43.993819       1 config.go:192] "Starting service config controller"
	I0429 13:51:43.993866       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:51:43.993907       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:51:43.996407       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:51:43.998835       1 config.go:319] "Starting node config controller"
	I0429 13:51:43.998908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:51:44.094628       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:51:44.097041       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:51:44.099434       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a625f9cc9cb850a4dcd518594a6f579cd61e05e0fbbee934912b04de1cb7d0d8] <==
	W0429 13:51:24.891932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 13:51:24.892082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 13:51:24.952649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 13:51:24.952769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 13:51:24.980175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 13:51:24.980319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 13:51:25.048071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 13:51:25.048202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 13:51:25.282635       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 13:51:25.282885       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:51:25.305089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 13:51:25.305223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 13:51:25.381775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 13:51:25.381835       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 13:51:25.386168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 13:51:25.386232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 13:51:25.444504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 13:51:25.444567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 13:51:25.469235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 13:51:25.469304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 13:51:25.469260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 13:51:25.469490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 13:51:25.493557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 13:51:25.493699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0429 13:51:27.489174       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:51:41 pause-553639 kubelet[5969]: E0429 13:51:41.191497    5969 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-553639" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-553639' and this object
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277756    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-proxy\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277815    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp64h\" (UniqueName: \"kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277833    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b56b310e-2281-4ff2-a3c1-9c6d3e340464-xtables-lock\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.277850    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b56b310e-2281-4ff2-a3c1-9c6d3e340464-lib-modules\") pod \"kube-proxy-lchdx\" (UID: \"b56b310e-2281-4ff2-a3c1-9c6d3e340464\") " pod="kube-system/kube-proxy-lchdx"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.328625    5969 topology_manager.go:215] "Topology Admit Handler" podUID="0b51d117-6754-4d5a-8191-6376818cd778" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.353685    5969 topology_manager.go:215] "Topology Admit Handler" podUID="ae828405-af7f-4d81-89db-04f5a8b615b8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378511    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b51d117-6754-4d5a-8191-6376818cd778-config-volume\") pod \"coredns-7db6d8ff4d-xfhkh\" (UID: \"0b51d117-6754-4d5a-8191-6376818cd778\") " pod="kube-system/coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378605    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae828405-af7f-4d81-89db-04f5a8b615b8-config-volume\") pod \"coredns-7db6d8ff4d-qbcb2\" (UID: \"ae828405-af7f-4d81-89db-04f5a8b615b8\") " pod="kube-system/coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378653    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqnx\" (UniqueName: \"kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx\") pod \"coredns-7db6d8ff4d-xfhkh\" (UID: \"0b51d117-6754-4d5a-8191-6376818cd778\") " pod="kube-system/coredns-7db6d8ff4d-xfhkh"
	Apr 29 13:51:41 pause-553639 kubelet[5969]: I0429 13:51:41.378704    5969 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2sxb\" (UniqueName: \"kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb\") pod \"coredns-7db6d8ff4d-qbcb2\" (UID: \"ae828405-af7f-4d81-89db-04f5a8b615b8\") " pod="kube-system/coredns-7db6d8ff4d-qbcb2"
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396253    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396333    5969 projected.go:200] Error preparing data for projected volume kube-api-access-vp64h for pod kube-system/kube-proxy-lchdx: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.396442    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h podName:b56b310e-2281-4ff2-a3c1-9c6d3e340464 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.896402969 +0000 UTC m=+15.993990662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vp64h" (UniqueName: "kubernetes.io/projected/b56b310e-2281-4ff2-a3c1-9c6d3e340464-kube-api-access-vp64h") pod "kube-proxy-lchdx" (UID: "b56b310e-2281-4ff2-a3c1-9c6d3e340464") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499373    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499441    5969 projected.go:200] Error preparing data for projected volume kube-api-access-crqnx for pod kube-system/coredns-7db6d8ff4d-xfhkh: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499536    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx podName:0b51d117-6754-4d5a-8191-6376818cd778 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.999508328 +0000 UTC m=+16.097096022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-crqnx" (UniqueName: "kubernetes.io/projected/0b51d117-6754-4d5a-8191-6376818cd778-kube-api-access-crqnx") pod "coredns-7db6d8ff4d-xfhkh" (UID: "0b51d117-6754-4d5a-8191-6376818cd778") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499661    5969 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499693    5969 projected.go:200] Error preparing data for projected volume kube-api-access-h2sxb for pod kube-system/coredns-7db6d8ff4d-qbcb2: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:42 pause-553639 kubelet[5969]: E0429 13:51:42.499721    5969 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb podName:ae828405-af7f-4d81-89db-04f5a8b615b8 nodeName:}" failed. No retries permitted until 2024-04-29 13:51:42.999710992 +0000 UTC m=+16.097298697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2sxb" (UniqueName: "kubernetes.io/projected/ae828405-af7f-4d81-89db-04f5a8b615b8-kube-api-access-h2sxb") pod "coredns-7db6d8ff4d-qbcb2" (UID: "ae828405-af7f-4d81-89db-04f5a8b615b8") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.354328    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qbcb2" podStartSLOduration=3.354283051 podStartE2EDuration="3.354283051s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.330090197 +0000 UTC m=+17.427677907" watchObservedRunningTime="2024-04-29 13:51:44.354283051 +0000 UTC m=+17.451870756"
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.355654    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lchdx" podStartSLOduration=3.355634475 podStartE2EDuration="3.355634475s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.355012533 +0000 UTC m=+17.452600243" watchObservedRunningTime="2024-04-29 13:51:44.355634475 +0000 UTC m=+17.453222179"
	Apr 29 13:51:44 pause-553639 kubelet[5969]: I0429 13:51:44.449381    5969 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xfhkh" podStartSLOduration=3.4492777009999998 podStartE2EDuration="3.449277701s" podCreationTimestamp="2024-04-29 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 13:51:44.407183669 +0000 UTC m=+17.504771381" watchObservedRunningTime="2024-04-29 13:51:44.449277701 +0000 UTC m=+17.546865414"
	Apr 29 13:51:47 pause-553639 kubelet[5969]: I0429 13:51:47.593383    5969 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 13:51:47 pause-553639 kubelet[5969]: I0429 13:51:47.594835    5969 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-553639 -n pause-553639
helpers_test.go:261: (dbg) Run:  kubectl --context pause-553639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (438.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7200.081s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-455748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 13:56:19.253161  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 13:56:31.435452  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/enable-default-cni-807154/client.crt: no such file or directory
E0429 13:57:22.018240  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/flannel-807154/client.crt: no such file or directory
E0429 13:57:29.283189  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/bridge-807154/client.crt: no such file or directory
E0429 13:57:38.722102  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/calico-807154/client.crt: no such file or directory
E0429 13:57:42.303255  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 13:57:59.076035  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/custom-flannel-807154/client.crt: no such file or directory
E0429 13:58:06.406539  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/calico-807154/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (18m43s)
	TestNetworkPlugins/group (7m59s)
	TestStartStop (15m14s)
	TestStartStop/group/default-k8s-diff-port (6m24s)
	TestStartStop/group/default-k8s-diff-port/serial (6m24s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (2m3s)
	TestStartStop/group/embed-certs (7m59s)
	TestStartStop/group/embed-certs/serial (7m59s)
	TestStartStop/group/embed-certs/serial/SecondStart (3m46s)
	TestStartStop/group/no-preload (8m2s)
	TestStartStop/group/no-preload/serial (8m2s)
	TestStartStop/group/no-preload/serial/SecondStart (3m18s)
	TestStartStop/group/old-k8s-version (8m59s)
	TestStartStop/group/old-k8s-version/serial (8m59s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (2m36s)

                                                
                                                
goroutine 3382 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000752680, 0xc0012d7bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000908030, {0x4955920, 0x2b, 0x2b}, {0xc00087b5c0?, 0xc0012d7c30?, 0x4a11cc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009fcd20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009fcd20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006a3d80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 3364 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xe13ce, 0xc002b26ab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc002602360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc002602360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a1f080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a1f080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0007529c0, 0xc000a1f080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e6c0, 0xc0003a61c0}, 0xc0007529c0, {0xc0027d20e0, 0x1c}, {0x0?, 0xc0025f8f60?}, {0x552353?, 0x4a26cf?}, {0xc000a18600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0007529c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0007529c0, 0xc00277e080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3024
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2341 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000932c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2198 [chan receive, 6 minutes]:
testing.(*T).Run(0xc002502d00, {0x26547d3?, 0x0?}, 0xc002ed6100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002502d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002502d00, 0xc000876380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 350 [chan receive, 73 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002128740, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2197 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc000662690)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002502b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002502b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002502b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002502b60, 0xc000876340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 23 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 22
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2615 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc000093750, 0xc000967f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0x7?, 0xc000093750, 0xc000093798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc00235a820?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000937d0?, 0x594064?, 0xc00223c150?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2577
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 158 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f536162c820, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0007d6700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0007d6700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0009148c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009148c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007ea0f0, {0x36316a0, 0xc0009148c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007ea0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc002502000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 155
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 597 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a1b340, 0xc002a2c2a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 287
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1670 [chan receive, 18 minutes]:
testing.(*T).Run(0xc000972000, {0x2653246?, 0x55249c?}, 0xc002bbf9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000972000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000972000, 0x30bffe0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2300 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00238ef00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2299
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2969 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2968
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2195 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0025021a0, 0x30c0200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1710
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3311 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f536162c348, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0020a9980?, 0xc002b0412c?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0020a9980, {0xc002b0412c, 0x3ed4, 0x3ed4})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0024e0480, {0xc002b0412c?, 0x539600?, 0x3e34?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0031c2870, {0x36194a0, 0xc0006a6678})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0031c2870}, {0x36194a0, 0xc0006a6678}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0024e0480?, {0x36195e0, 0xc0031c2870})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0024e0480, {0x36195e0, 0xc0031c2870})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0031c2870}, {0x3619500, 0xc0024e0480}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00088e400?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3309
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2526 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00290a600, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2524
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3286 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f536162c250, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0039dc060?, 0xc00287ea2f?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0039dc060, {0xc00287ea2f, 0x5d1, 0x5d1})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a20078, {0xc00287ea2f?, 0xc000096530?, 0x22f?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c1b0, {0x36194a0, 0xc0024e0280})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc00223c1b0}, {0x36194a0, 0xc0024e0280}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002a20078?, {0x36195e0, 0xc00223c1b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002a20078, {0x36195e0, 0xc00223c1b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc00223c1b0}, {0x3619500, 0xc002a20078}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002208a20?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3285
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2280 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2279
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2199 [chan receive, 9 minutes]:
testing.(*T).Run(0xc002502ea0, {0x26547d3?, 0x0?}, 0xc00277e280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002502ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002502ea0, 0xc000876540)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2967 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0023249d0, 0x10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028af140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002324a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001307eb0, {0x361aa00, 0xc0030c3590}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001307eb0, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2906
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2278 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000a22750, 0x12)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00238ecc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a22780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002cb8010, {0x361aa00, 0xc002e6e240}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002cb8010, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3201 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c11d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3348 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f536162bc80, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00246e720?, 0xc002a3426d?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00246e720, {0xc002a3426d, 0x593, 0x593})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0024e0708, {0xc002a3426d?, 0x21a0020?, 0x20a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0031c2c60, {0x36194a0, 0xc001302668})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0031c2c60}, {0x36194a0, 0xc001302668}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0024e0708?, {0x36195e0, 0xc0031c2c60})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0024e0708, {0x36195e0, 0xc0031c2c60})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0031c2c60}, {0x3619500, 0xc0024e0708}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00088e680?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3347
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3213 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3212
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2301 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a22780, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2299
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2279 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc00214bf50, 0xc00214bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0xa0?, 0xc00214bf50, 0xc00214bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc0029484e0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00214bfd0?, 0x594064?, 0xc0026905a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2769 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2768
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2525 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0039dcf60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2524
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2845 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002b10390, 0x10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0039dc840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b103c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000892030, {0x361aa00, 0xc0030c2060}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000892030, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 349 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00238ea80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2916 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b103c0, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2914
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 459 [chan send, 73 minutes]:
os/exec.(*Cmd).watchCtx(0xc00238db80, 0xc0025e2660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 458
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2968 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc00251ff50, 0xc00251ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0x0?, 0xc00251ff50, 0xc00251ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0x305b?, 0xc000061800?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00251ffd0?, 0x594064?, 0x684f4?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2906
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3347 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xe12cb, 0xc002b27ab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0026b4660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0026b4660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00245a6e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00245a6e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0029496c0, 0xc00245a6e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e6c0, 0xc000190c40}, 0xc0029496c0, {0xc0025465d0, 0x16}, {0x0?, 0xc002520f60?}, {0x552353?, 0x4a26cf?}, {0xc0020d2000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0029496c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0029496c0, 0xc00088e680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2897
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3367 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a1f080, 0xc002208600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3364
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2196 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0025029c0, {0x26547d3?, 0x0?}, 0xc002718000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0025029c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0025029c0, 0xc000876300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3285 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xe10ae, 0xc003183ab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0029161e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0029161e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00289a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00289a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00235a000, 0xc00289a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e6c0, 0xc000891110}, 0xc00235a000, {0xc0020d7ab8, 0x12}, {0x0?, 0xc002147760?}, {0x552353?, 0x4a26cf?}, {0xc0037e0f00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00235a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00235a000, 0xc0001c3400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3119
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2561 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc0020b3f50, 0xc0020b3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0x0?, 0xc0020b3f50, 0xc0020b3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc00235a4e0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0027c2470?, 0x9ac965?, 0xc0020b3fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2526
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 802 [select, 71 minutes]:
net/http.(*persistConn).writeLoop(0xc00288ab40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 755
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3310 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f536162bd78, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0020a98c0?, 0xc002064df4?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0020a98c0, {0xc002064df4, 0x20c, 0x20c})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0024e0450, {0xc002064df4?, 0xc002521d30?, 0x43?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0031c2840, {0x36194a0, 0xc001302558})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0031c2840}, {0x36194a0, 0xc001302558}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0024e0450?, {0x36195e0, 0xc0031c2840})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0024e0450, {0x36195e0, 0xc0031c2840})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0031c2840}, {0x3619500, 0xc0024e0450}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002938420?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3309
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2897 [chan receive, 4 minutes]:
testing.(*T).Run(0xc002948000, {0x2660530?, 0x60400000004?}, 0xc00088e680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002948000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002948000, 0xc002718000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2576 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc003606180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 402 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002128710, 0x22)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00238e960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002128740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00021b460, {0x361aa00, 0xc0020d4db0}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021b460, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 350
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 403 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc000509f50, 0xc00016ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0xee?, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc000972ea0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x553be5?, 0xc000972ea0?, 0xc000a22ac0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 350
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 404 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 403
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2847 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2846
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2614 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0027b4bd0, 0x11)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc003606060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027b4c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002cb9600, {0x361aa00, 0xc0027d7e60}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002cb9600, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2577
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 534 [chan send, 73 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026d2580, 0xc002690fc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 533
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 753 [select, 71 minutes]:
net/http.(*persistConn).readLoop(0xc00288ab40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 755
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2342 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021290c0, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2905 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0028af260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2936
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3212 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc002149750, 0xc002149798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0x20?, 0xc002149750, 0xc002149798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc002948820?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc002a47a20?, 0xc002938720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2332 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002129090, 0x12)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009324e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021290c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00092d790, {0x361aa00, 0xc0023b9740}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00092d790, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3365 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f5360155220, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002832360?, 0xc002a35215?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002832360, {0xc002a35215, 0x5eb, 0x5eb})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a20060, {0xc002a35215?, 0x21a0020?, 0x215?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027d6180, {0x36194a0, 0xc0006a6120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0027d6180}, {0x36194a0, 0xc0006a6120}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002a20060?, {0x36195e0, 0xc0027d6180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002a20060, {0x36195e0, 0xc0027d6180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0027d6180}, {0x3619500, 0xc002a20060}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00277e080?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3364
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3185 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3184
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2333 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0xd0?, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc001b98360?, 0xc00260e980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000947d0?, 0x594064?, 0xc0028ba900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2768 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc00251e750, 0xc00251e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0x7?, 0xc00251e750, 0xc00251e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc00235ba00?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00251e7d0?, 0x594064?, 0xc0024b5e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2800
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2560 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00290a5d0, 0x12)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0039dce40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00290a600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027c24a0, {0x361aa00, 0xc0027d60c0}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027c24a0, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2526
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2201 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0009724e0, {0x26547d3?, 0x0?}, 0xc00260fe80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009724e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0009724e0, 0xc0008769c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3288 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00289a000, 0xc0007cfb00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3285
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3119 [chan receive, 4 minutes]:
testing.(*T).Run(0xc000752820, {0x2660530?, 0x60400000004?}, 0xc0001c3400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000752820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000752820, 0xc00260fe80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2201
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2846 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc002522750, 0xc002522798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0xa8?, 0xc002522750, 0xc002522798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc0025227b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99de1b?, 0xc003ac6a80?, 0xc002522838?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2562 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2561
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1710 [chan receive, 16 minutes]:
testing.(*T).Run(0xc000972820, {0x2653246?, 0x552353?}, 0x30c0200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000972820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000972820, 0x30c0028)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
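
The TestStartStop stacks show Go's standard nested-subtest fan-out: a parent blocks in t.Run (the "chan receive" frames) until its child finishes, while parallel children such as goroutine 1913 wait in waitParallel for a free slot. A hedged sketch of that shape, with placeholder profile names rather than minikube's:

```go
// Sketch of the nested t.Run structure visible in these stacks. The
// subtest and profile names are placeholders, not the real test tree.
package integration_sketch

import "testing"

func TestStartStopShape(t *testing.T) {
	for _, profile := range []string{"profile-a", "profile-b"} {
		profile := profile
		t.Run(profile, func(t *testing.T) {
			// t.Parallel pauses this subtest until the parent returns;
			// the enclosing test then waits (a chan receive) for every
			// parallel child to finish.
			t.Parallel()
			t.Run("SecondStart", func(t *testing.T) {
				// a validate step would shell out to the minikube binary here
			})
		})
	}
}
```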

                                                
                                                
goroutine 2906 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002324a00, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2936
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3183 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00251abd0, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c11c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00251ac00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002818b40, {0x361aa00, 0xc002881260}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002818b40, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3234
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1913 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000662690)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0020a3ba0, 0xc002bbf9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2334 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2333
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3123 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00235b1e0, {0x2660530?, 0x60400000004?}, 0xc00088e400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00235b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00235b1e0, 0xc00277e280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2199
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3184 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e880, 0xc0000604e0}, 0xc002d06f50, 0xc002d06f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e880, 0xc0000604e0}, 0xee?, 0xc002d06f50, 0xc002d06f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e880?, 0xc0000604e0?}, 0xc0025f4fb0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0025f4fd0?, 0x594064?, 0xc00260ec80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3234
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3366 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f536162c158, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0028325a0?, 0xc00203ab82?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0028325a0, {0xc00203ab82, 0x147e, 0x147e})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a20080, {0xc00203ab82?, 0xc0025f3530?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027d61b0, {0x36194a0, 0xc0024e0220})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0027d61b0}, {0x36194a0, 0xc0024e0220}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002a20080?, {0x36195e0, 0xc0027d61b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002a20080, {0x36195e0, 0xc0027d61b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0027d61b0}, {0x3619500, 0xc002a20080}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0029381e0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3364
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3234 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00251ac00, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3024 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000973520, {0x2660530?, 0x60400000004?}, 0xc00277e080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000973520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000973520, 0xc002ed6100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2198
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2577 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027b4c40, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2616 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2615
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3243 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00238f800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2800 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00251aa40, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2806
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3350 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00245a6e0, 0xc002938cc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3347
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3349 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f536162c060, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00246e7e0?, 0xc002a54bb6?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00246e7e0, {0xc002a54bb6, 0x144a, 0x144a})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0024e0740, {0xc002a54bb6?, 0xc002523505?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0031c2cc0, {0x36194a0, 0xc001302678})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc0031c2cc0}, {0x36194a0, 0xc001302678}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0024e0740?, {0x36195e0, 0xc0031c2cc0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0024e0740, {0x36195e0, 0xc0031c2cc0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc0031c2cc0}, {0x3619500, 0xc0024e0740}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002209440?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3347
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3287 [IO wait]:
internal/poll.runtime_pollWait(0x7f536162c728, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0039dc120?, 0xc002b1ef88?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0039dc120, {0xc002b1ef88, 0x3078, 0x3078})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a20090, {0xc002b1ef88?, 0x21a0020?, 0x3e08?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c1e0, {0x36194a0, 0xc001302500})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36195e0, 0xc00223c1e0}, {0x36194a0, 0xc001302500}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002a20090?, {0x36195e0, 0xc00223c1e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002a20090, {0x36195e0, 0xc00223c1e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36195e0, 0xc00223c1e0}, {0x3619500, 0xc002a20090}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001c3400?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3285
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2915 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0039dc960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2914
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3309 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xe1186, 0xc00016cab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0026b4480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0026b4480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00245a420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00245a420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002949520, 0xc00245a420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e6c0, 0xc00046ca80}, 0xc002949520, {0xc0023b3a70, 0x11}, {0x0?, 0xc00251f760?}, {0x552353?, 0x4a26cf?}, {0xc00232ea00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002949520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002949520, 0xc00088e400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
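
Goroutine 3309 is the integration helper blocked in (*Cmd).Wait on an external minikube invocation; goroutine 3312 is the watchCtx goroutine os/exec starts for that same command, and the IO-wait goroutines elsewhere in the dump are the pipe-draining goroutines exec creates for other invocations. A minimal sketch of running a command that way, with a placeholder command and timeout rather than the test's real arguments:

```go
// Minimal sketch of what the helper does when it parks in (*Cmd).Wait:
// run an external command under a context and capture its output. exec
// itself spawns the watchCtx and pipe-copying goroutines seen nearby.
package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	var stdout, stderr bytes.Buffer
	cmd := exec.CommandContext(ctx, "echo", "stand-in for a minikube start run") // placeholder command
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil { // blocks here, like goroutine 3309
		fmt.Println("command failed:", err, stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
```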

                                                
                                                
goroutine 2799 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0028fff20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2806
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2767 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00251aa10, 0x10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028ffe00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00251aa40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000892180, {0x361aa00, 0xc0023b9440}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000892180, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2800
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3312 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00245a420, 0xc0029385a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3309
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3244 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002128240, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3211 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc002128210, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00238f6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002128240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002c1af90, {0x361aa00, 0xc0026b86f0}, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002c1af90, 0x3b9aca00, 0x0, 0x1, 0xc0000604e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                    

Test pass (163/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.84
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.30.0/json-events 4.02
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.16
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 101.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
28 TestCertOptions 88.02
29 TestCertExpiration 313.23
31 TestForceSystemdFlag 47.18
32 TestForceSystemdEnv 72.35
34 TestKVMDriverInstallOrUpdate 3.32
38 TestErrorSpam/setup 44.87
39 TestErrorSpam/start 0.42
40 TestErrorSpam/status 0.86
41 TestErrorSpam/pause 1.73
42 TestErrorSpam/unpause 1.76
43 TestErrorSpam/stop 5.94
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 61.13
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 45.5
50 TestFunctional/serial/KubeContext 0.05
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
55 TestFunctional/serial/CacheCmd/cache/add_local 1.57
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
60 TestFunctional/serial/CacheCmd/cache/delete 0.13
61 TestFunctional/serial/MinikubeKubectlCmd 0.13
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
63 TestFunctional/serial/ExtraConfig 286.75
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.29
66 TestFunctional/serial/LogsFileCmd 1.27
67 TestFunctional/serial/InvalidService 4.23
69 TestFunctional/parallel/ConfigCmd 0.46
70 TestFunctional/parallel/DashboardCmd 27.44
71 TestFunctional/parallel/DryRun 0.53
72 TestFunctional/parallel/InternationalLanguage 0.17
73 TestFunctional/parallel/StatusCmd 1.29
77 TestFunctional/parallel/ServiceCmdConnect 11.59
78 TestFunctional/parallel/AddonsCmd 0.18
79 TestFunctional/parallel/PersistentVolumeClaim 49.98
81 TestFunctional/parallel/SSHCmd 0.5
82 TestFunctional/parallel/CpCmd 1.58
83 TestFunctional/parallel/MySQL 25.87
84 TestFunctional/parallel/FileSync 0.26
85 TestFunctional/parallel/CertSync 1.51
89 TestFunctional/parallel/NodeLabels 0.08
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
93 TestFunctional/parallel/License 0.21
103 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
105 TestFunctional/parallel/ProfileCmd/profile_list 0.4
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
107 TestFunctional/parallel/MountCmd/any-port 6.98
108 TestFunctional/parallel/MountCmd/specific-port 1.67
109 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
110 TestFunctional/parallel/ServiceCmd/List 0.67
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
116 TestFunctional/parallel/ServiceCmd/Format 0.38
117 TestFunctional/parallel/Version/short 0.06
118 TestFunctional/parallel/Version/components 0.88
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
123 TestFunctional/parallel/ImageCommands/ImageBuild 2.3
124 TestFunctional/parallel/ImageCommands/Setup 0.87
125 TestFunctional/parallel/ServiceCmd/URL 0.4
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.2
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.46
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.69
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.78
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.22
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.35
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
139 TestMultiControlPlane/serial/StartCluster 203.12
140 TestMultiControlPlane/serial/DeployApp 5.29
141 TestMultiControlPlane/serial/PingHostFromPods 1.45
142 TestMultiControlPlane/serial/AddWorkerNode 48.05
143 TestMultiControlPlane/serial/NodeLabels 0.08
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
145 TestMultiControlPlane/serial/CopyFile 14.53
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.43
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.58
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.41
154 TestMultiControlPlane/serial/RestartCluster 365.33
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
156 TestMultiControlPlane/serial/AddSecondaryNode 72.03
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
161 TestJSONOutput/start/Command 99.04
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.82
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.72
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.43
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.26
189 TestMainNoArgs 0.07
190 TestMinikubeProfile 96.7
193 TestMountStart/serial/StartWithMountFirst 27.97
194 TestMountStart/serial/VerifyMountFirst 0.44
195 TestMountStart/serial/StartWithMountSecond 28.12
196 TestMountStart/serial/VerifyMountSecond 0.43
197 TestMountStart/serial/DeleteFirst 0.94
198 TestMountStart/serial/VerifyMountPostDelete 0.44
199 TestMountStart/serial/Stop 1.39
200 TestMountStart/serial/RestartStopped 24.77
201 TestMountStart/serial/VerifyMountPostStop 0.44
204 TestMultiNode/serial/FreshStart2Nodes 103.57
205 TestMultiNode/serial/DeployApp2Nodes 4.24
206 TestMultiNode/serial/PingHostFrom2Pods 0.99
207 TestMultiNode/serial/AddNode 43.98
208 TestMultiNode/serial/MultiNodeLabels 0.07
209 TestMultiNode/serial/ProfileList 0.26
210 TestMultiNode/serial/CopyFile 8.22
211 TestMultiNode/serial/StopNode 2.54
212 TestMultiNode/serial/StartAfterStop 29.51
214 TestMultiNode/serial/DeleteNode 2.47
216 TestMultiNode/serial/RestartMultiNode 170.96
217 TestMultiNode/serial/ValidateNameConflict 47.77
224 TestScheduledStopUnix 116.48
228 TestRunningBinaryUpgrade 178.51
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
234 TestNoKubernetes/serial/StartWithK8s 129.99
246 TestNoKubernetes/serial/StartWithStopK8s 41.11
247 TestNoKubernetes/serial/Start 49.43
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
249 TestNoKubernetes/serial/ProfileList 1.71
250 TestNoKubernetes/serial/Stop 2.32
251 TestNoKubernetes/serial/StartNoArgs 21.78
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
253 TestStoppedBinaryUpgrade/Setup 0.55
254 TestStoppedBinaryUpgrade/Upgrade 118.43
263 TestPause/serial/Start 90.82
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
TestDownloadOnly/v1.20.0/json-events (10.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-127910 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-127910 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.841476682s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.84s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-127910
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-127910: exit status 85 (80.295685ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-127910 | jenkins | v1.33.0 | 29 Apr 24 11:58 UTC |          |
	|         | -p download-only-127910        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:58:16
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:58:16.649010  854672 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:58:16.649310  854672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:16.649320  854672 out.go:304] Setting ErrFile to fd 2...
	I0429 11:58:16.649325  854672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:16.649533  854672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	W0429 11:58:16.649657  854672 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18773-847310/.minikube/config/config.json: open /home/jenkins/minikube-integration/18773-847310/.minikube/config/config.json: no such file or directory
	I0429 11:58:16.650308  854672 out.go:298] Setting JSON to true
	I0429 11:58:16.651282  854672 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":74442,"bootTime":1714317455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:58:16.651415  854672 start.go:139] virtualization: kvm guest
	I0429 11:58:16.653898  854672 out.go:97] [download-only-127910] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:58:16.655411  854672 out.go:169] MINIKUBE_LOCATION=18773
	W0429 11:58:16.654053  854672 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 11:58:16.654101  854672 notify.go:220] Checking for updates...
	I0429 11:58:16.658229  854672 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:58:16.659657  854672 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 11:58:16.661164  854672 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 11:58:16.662557  854672 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 11:58:16.665086  854672 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 11:58:16.665386  854672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:58:16.705023  854672 out.go:97] Using the kvm2 driver based on user configuration
	I0429 11:58:16.705069  854672 start.go:297] selected driver: kvm2
	I0429 11:58:16.705077  854672 start.go:901] validating driver "kvm2" against <nil>
	I0429 11:58:16.705513  854672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:58:16.705606  854672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-847310/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 11:58:16.722705  854672 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 11:58:16.722807  854672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:58:16.723286  854672 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 11:58:16.723493  854672 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 11:58:16.723557  854672 cni.go:84] Creating CNI manager for ""
	I0429 11:58:16.723569  854672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 11:58:16.723578  854672 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 11:58:16.723641  854672 start.go:340] cluster config:
	{Name:download-only-127910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-127910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:58:16.723833  854672 iso.go:125] acquiring lock: {Name:mk86e9cd005d09998272ae59568d4d340a1c61b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:58:16.725773  854672 out.go:97] Downloading VM boot image ...
	I0429 11:58:16.725809  854672 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 11:58:21.624733  854672 out.go:97] Starting "download-only-127910" primary control-plane node in "download-only-127910" cluster
	I0429 11:58:21.624802  854672 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 11:58:21.656873  854672 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 11:58:21.656937  854672 cache.go:56] Caching tarball of preloaded images
	I0429 11:58:21.657137  854672 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 11:58:21.658841  854672 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 11:58:21.658865  854672 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 11:58:21.690629  854672 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 11:58:25.866745  854672 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 11:58:25.866868  854672 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18773-847310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-127910 host does not exist
	  To start a cluster, run: "minikube start -p download-only-127910"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-127910
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (4.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-439090 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-439090 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.019381357s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (4.02s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-439090
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-439090: exit status 85 (79.910356ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-127910 | jenkins | v1.33.0 | 29 Apr 24 11:58 UTC |                     |
	|         | -p download-only-127910        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 11:58 UTC | 29 Apr 24 11:58 UTC |
	| delete  | -p download-only-127910        | download-only-127910 | jenkins | v1.33.0 | 29 Apr 24 11:58 UTC | 29 Apr 24 11:58 UTC |
	| start   | -o=json --download-only        | download-only-439090 | jenkins | v1.33.0 | 29 Apr 24 11:58 UTC |                     |
	|         | -p download-only-439090        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:58:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:58:27.881530  854856 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:58:27.881808  854856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:27.881817  854856 out.go:304] Setting ErrFile to fd 2...
	I0429 11:58:27.881821  854856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:58:27.882035  854856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 11:58:27.882694  854856 out.go:298] Setting JSON to true
	I0429 11:58:27.883664  854856 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":74453,"bootTime":1714317455,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:58:27.883738  854856 start.go:139] virtualization: kvm guest
	I0429 11:58:27.886113  854856 out.go:97] [download-only-439090] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:58:27.887877  854856 out.go:169] MINIKUBE_LOCATION=18773
	I0429 11:58:27.886301  854856 notify.go:220] Checking for updates...
	I0429 11:58:27.890629  854856 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:58:27.892270  854856 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 11:58:27.893849  854856 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 11:58:27.895223  854856 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-439090 host does not exist
	  To start a cluster, run: "minikube start -p download-only-439090"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-439090
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-130934 --alsologtostderr --binary-mirror http://127.0.0.1:32769 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-130934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-130934
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (101.62s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-479396 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-479396 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.512518514s)
helpers_test.go:175: Cleaning up "offline-crio-479396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-479396
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-479396: (1.1061156s)
--- PASS: TestOffline (101.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-943107
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-943107: exit status 85 (67.425523ms)

                                                
                                                
-- stdout --
	* Profile "addons-943107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-943107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-943107
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-943107: exit status 85 (67.80477ms)

                                                
                                                
-- stdout --
	* Profile "addons-943107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-943107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestCertOptions (88.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-943942 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-943942 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m26.567448638s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-943942 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-943942 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-943942 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-943942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-943942
--- PASS: TestCertOptions (88.02s)

                                                
                                    
TestCertExpiration (313.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-512362 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-512362 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m36.866584229s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-512362 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-512362 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (35.307190008s)
helpers_test.go:175: Cleaning up "cert-expiration-512362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-512362
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-512362: (1.056896177s)
--- PASS: TestCertExpiration (313.23s)

                                                
                                    
TestForceSystemdFlag (47.18s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-014153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-014153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.740625963s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-014153 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-014153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-014153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-014153: (1.186818685s)
--- PASS: TestForceSystemdFlag (47.18s)

                                                
                                    
TestForceSystemdEnv (72.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-517756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-517756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.281942302s)
helpers_test.go:175: Cleaning up "force-systemd-env-517756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-517756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-517756: (1.070034725s)
--- PASS: TestForceSystemdEnv (72.35s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.32s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.32s)

                                                
                                    
TestErrorSpam/setup (44.87s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-235494 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-235494 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-235494 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-235494 --driver=kvm2  --container-runtime=crio: (44.872301323s)
--- PASS: TestErrorSpam/setup (44.87s)

                                                
                                    
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (5.94s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop: (2.319737754s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop: (1.99780392s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-235494 --log_dir /tmp/nospam-235494 stop: (1.623932171s)
--- PASS: TestErrorSpam/stop (5.94s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18773-847310/.minikube/files/etc/test/nested/copy/854660/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-341155 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.1296311s)
--- PASS: TestFunctional/serial/StartWithProxy (61.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.5s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-341155 --alsologtostderr -v=8: (45.494378277s)
functional_test.go:659: soft start took 45.495474046s for "functional-341155" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.50s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-341155 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:3.1: (1.104076161s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:3.3: (1.141468449s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 cache add registry.k8s.io/pause:latest: (1.161739552s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-341155 /tmp/TestFunctionalserialCacheCmdcacheadd_local2660532538/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache add minikube-local-cache-test:functional-341155
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 cache add minikube-local-cache-test:functional-341155: (1.169544992s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache delete minikube-local-cache-test:functional-341155
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-341155
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.512579ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 cache reload: (1.07963403s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 kubectl -- --context functional-341155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-341155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (286.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-341155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m46.750558963s)
functional_test.go:757: restart took 4m46.750739981s for "functional-341155" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (286.75s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-341155 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 logs: (1.293277632s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 logs --file /tmp/TestFunctionalserialLogsFileCmd1595324230/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 logs --file /tmp/TestFunctionalserialLogsFileCmd1595324230/001/logs.txt: (1.266872832s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctional/serial/InvalidService (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-341155 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-341155
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-341155: exit status 115 (323.348189ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.81:30847 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-341155 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 config get cpus: exit status 14 (71.182483ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 config get cpus: exit status 14 (70.229415ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (27.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-341155 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-341155 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 869303: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.44s)

                                                
                                    
TestFunctional/parallel/DryRun (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-341155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (346.780187ms)

                                                
                                                
-- stdout --
	* [functional-341155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:46:31.898217  868869 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:46:31.898566  868869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:46:31.898580  868869 out.go:304] Setting ErrFile to fd 2...
	I0429 12:46:31.898586  868869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:46:31.898940  868869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:46:31.899728  868869 out.go:298] Setting JSON to false
	I0429 12:46:31.901242  868869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77337,"bootTime":1714317455,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:46:31.901331  868869 start.go:139] virtualization: kvm guest
	I0429 12:46:32.006195  868869 out.go:177] * [functional-341155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:46:32.008022  868869 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:46:32.008050  868869 notify.go:220] Checking for updates...
	I0429 12:46:32.009552  868869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:46:32.011195  868869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:46:32.012696  868869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:46:32.017058  868869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:46:32.042400  868869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:46:32.044723  868869 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:46:32.045437  868869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:46:32.045505  868869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:46:32.064400  868869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39763
	I0429 12:46:32.064903  868869 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:32.065610  868869 main.go:141] libmachine: Using API Version  1
	I0429 12:46:32.065642  868869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:32.066289  868869 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:32.066506  868869 main.go:141] libmachine: (functional-341155) Calling .DriverName
	I0429 12:46:32.066817  868869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:46:32.067270  868869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:46:32.067314  868869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:46:32.090084  868869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33859
	I0429 12:46:32.090632  868869 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:32.091262  868869 main.go:141] libmachine: Using API Version  1
	I0429 12:46:32.091290  868869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:32.091720  868869 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:32.091979  868869 main.go:141] libmachine: (functional-341155) Calling .DriverName
	I0429 12:46:32.131818  868869 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 12:46:32.133222  868869 start.go:297] selected driver: kvm2
	I0429 12:46:32.133240  868869 start.go:901] validating driver "kvm2" against &{Name:functional-341155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-341155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:46:32.133359  868869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:46:32.135575  868869 out.go:177] 
	W0429 12:46:32.136959  868869 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 12:46:32.138015  868869 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-341155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.264233ms)

                                                
                                                
-- stdout --
	* [functional-341155] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:46:31.684232  868824 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:46:31.684342  868824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:46:31.684346  868824 out.go:304] Setting ErrFile to fd 2...
	I0429 12:46:31.684361  868824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:46:31.684646  868824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 12:46:31.685242  868824 out.go:298] Setting JSON to false
	I0429 12:46:31.686290  868824 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":77337,"bootTime":1714317455,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:46:31.686362  868824 start.go:139] virtualization: kvm guest
	I0429 12:46:31.688934  868824 out.go:177] * [functional-341155] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0429 12:46:31.690429  868824 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:46:31.690482  868824 notify.go:220] Checking for updates...
	I0429 12:46:31.691964  868824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:46:31.693453  868824 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	I0429 12:46:31.694855  868824 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	I0429 12:46:31.696176  868824 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:46:31.697530  868824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:46:31.699710  868824 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:46:31.700317  868824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:46:31.700402  868824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:46:31.718586  868824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0429 12:46:31.719198  868824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:31.719847  868824 main.go:141] libmachine: Using API Version  1
	I0429 12:46:31.719873  868824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:31.720248  868824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:31.720466  868824 main.go:141] libmachine: (functional-341155) Calling .DriverName
	I0429 12:46:31.720735  868824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:46:31.721080  868824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 12:46:31.721132  868824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:46:31.742054  868824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0429 12:46:31.742718  868824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:31.743454  868824 main.go:141] libmachine: Using API Version  1
	I0429 12:46:31.743481  868824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:31.743945  868824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:31.744231  868824 main.go:141] libmachine: (functional-341155) Calling .DriverName
	I0429 12:46:31.783217  868824 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0429 12:46:31.784523  868824 start.go:297] selected driver: kvm2
	I0429 12:46:31.784544  868824 start.go:901] validating driver "kvm2" against &{Name:functional-341155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-341155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:46:31.784701  868824 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:46:31.787026  868824 out.go:177] 
	W0429 12:46:31.788410  868824 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 12:46:31.789686  868824 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
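
The localized run above says, in French, that minikube reuses the kvm2 driver from the existing profile ("Utilisation du pilote kvm2 basé sur le profil existant") and then aborts because the requested 250 MiB is below the usable 1800 MB minimum, the same RSRC_INSUFFICIENT_REQ_MEMORY reason as the English dry-run. A rough sketch of reproducing the French output follows; it assumes minikube selects its translation from the standard locale variables (LC_ALL/LANG/LANGUAGE), which is an assumption, not something the log states.

// i18n_drycheck.go: run the same dry-run with a French locale requested.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-341155", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	// Request French output; which variables minikube actually consults is assumed here.
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr", "LANGUAGE=fr")
	out, err := cmd.CombinedOutput()
	// A non-zero exit (status 23 above) is expected: 250MB is below the minimum.
	fmt.Printf("%s(exit: %v)\n", out, err)
}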

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
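
The second status invocation above uses a Go template (-f) to pull individual fields out of the status report. A small sketch of the same call, with the template string copied verbatim from the test (including its "kublet" key), assuming the profile from this run:

// status_fmt.go: query selected status fields with a Go template.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
		"status", "-f",
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	if err != nil {
		// minikube status exits non-zero when a component is not healthy.
		log.Printf("status reported a problem: %v", err)
	}
	fmt.Printf("%s", out)
}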

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-341155 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-341155 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-h9phq" [fdab0c6b-c282-4afe-b522-ec5dd558f703] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-h9phq" [fdab0c6b-c282-4afe-b522-ec5dd558f703] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005471743s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.81:31559
functional_test.go:1671: http://192.168.39.81:31559: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-h9phq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.81:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.81:31559
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.59s)
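
The flow above is the standard connectivity smoke test: create a deployment from the echoserver image, expose it as a NodePort service, resolve the node URL with "minikube service ... --url", then issue an HTTP GET against it. A condensed sketch of the same sequence, assuming kubectl and the minikube binary are on the paths this job uses and that the pod is already Ready (the real test waits for it first):

// svc_connect.go: deploy, expose, resolve the NodePort URL, and GET it.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	run("kubectl", "--context", "functional-341155", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", "functional-341155", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	url := run("out/minikube-linux-amd64", "-p", "functional-341155",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}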

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (49.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7a37b357-ea46-4840-8953-79998fcd6238] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007053107s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-341155 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-341155 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-341155 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9b1cbbde-231d-4bee-ab62-5e436e4e21c4] Pending
helpers_test.go:344: "sp-pod" [9b1cbbde-231d-4bee-ab62-5e436e4e21c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9b1cbbde-231d-4bee-ab62-5e436e4e21c4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004313292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-341155 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-341155 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-341155 delete -f testdata/storage-provisioner/pod.yaml: (3.854041292s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9b156635-3d42-440b-b6c1-448403cbe35e] Pending
helpers_test.go:344: "sp-pod" [9b156635-3d42-440b-b6c1-448403cbe35e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9b156635-3d42-440b-b6c1-448403cbe35e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.005377164s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-341155 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.98s)
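
The PVC test above checks that data written into the claim-backed mount survives pod deletion: apply the claim and a pod that mounts it, write /tmp/mount/foo, delete and recreate the pod, then list /tmp/mount. A condensed sketch of that sequence; it reuses the repo's testdata manifests rather than reproducing their contents, and it omits the Ready-waits the real test performs between steps:

// pvc_persistence.go: verify a file written through the claim survives pod recreation.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-341155"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// If the claim really persisted, the file written by the first pod is still listed.
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}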

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh -n functional-341155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cp functional-341155:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3334595813/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh -n functional-341155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh -n functional-341155 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)
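
The copy round-trip above pushes a file into the VM with "minikube cp", reads it back over ssh, and also copies it from the VM to a host path. A short sketch of the same round-trip; the host destination /tmp/cp-test.txt stands in for the temp directory the test generates:

// cp_roundtrip.go: copy a file into the guest, verify it, and copy it back out.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-341155"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	minikube("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(minikube("ssh", "-n", "functional-341155", "sudo cat /home/docker/cp-test.txt"))
	// Copying back out to the host works the same way, as the logged commands show.
	minikube("cp", "functional-341155:/home/docker/cp-test.txt", "/tmp/cp-test.txt")
}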

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-341155 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-kjhbw" [6464637f-4872-4046-8625-3a82db01eb1c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-kjhbw" [6464637f-4872-4046-8625-3a82db01eb1c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.814955447s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341155 exec mysql-64454c8b5c-kjhbw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341155 exec mysql-64454c8b5c-kjhbw -- mysql -ppassword -e "show databases;": exit status 1 (400.19002ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341155 exec mysql-64454c8b5c-kjhbw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341155 exec mysql-64454c8b5c-kjhbw -- mysql -ppassword -e "show databases;": exit status 1 (199.140391ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341155 exec mysql-64454c8b5c-kjhbw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.87s)
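
The two ERROR 2002 failures above are expected while mysqld is still initializing inside the pod; the test simply re-runs the same exec until the server socket exists. A sketch of that retry-until-ready loop, using the pod name and password from this run:

// mysql_retry.go: retry "show databases;" until mysqld inside the pod is reachable.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-341155",
			"exec", "mysql-64454c8b5c-kjhbw", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 2002 here just means mysqld has not created its socket yet.
		log.Printf("attempt %d: %v", attempt, err)
		time.Sleep(3 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}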

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/854660/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /etc/test/nested/copy/854660/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/854660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /etc/ssl/certs/854660.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/854660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /usr/share/ca-certificates/854660.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8546602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /etc/ssl/certs/8546602.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8546602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /usr/share/ca-certificates/8546602.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-341155 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
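
The label check above uses kubectl's go-template output to dump every label key on the first node. A sketch of the same query; the shell-style single quotes around the template in the logged command are dropped here because no shell is involved:

// node_labels.go: print every label key on the first node of the cluster.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-341155",
		"get", "nodes", "--output=go-template",
		"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("%s\n", out)
}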

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "sudo systemctl is-active docker": exit status 1 (303.406437ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "sudo systemctl is-active containerd": exit status 1 (264.432074ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
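
The "Non-zero exit ... exit status 1" lines above are the desired outcome on a crio cluster: "systemctl is-active" prints "inactive" and exits non-zero (status 3) for a unit that is not running, and "minikube ssh" propagates that failure. A sketch that checks both alternative runtimes the same way:

// runtime_inactive.go: confirm docker and containerd are inactive on a crio node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		// A non-nil err plus "inactive" on stdout is the expected result here.
		fmt.Printf("%s: %q (err: %v)\n", unit, state, err)
	}
}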

                                                
                                    
x
+
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-341155 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-341155 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-b4g2m" [36d54158-d690-41a0-9828-29ecbd9fd510] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-b4g2m" [36d54158-d690-41a0-9828-29ecbd9fd510] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004541298s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "326.91031ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "68.401537ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "330.594819ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "64.453536ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdany-port1403821105/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714394781460530309" to /tmp/TestFunctionalparallelMountCmdany-port1403821105/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714394781460530309" to /tmp/TestFunctionalparallelMountCmdany-port1403821105/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714394781460530309" to /tmp/TestFunctionalparallelMountCmdany-port1403821105/001/test-1714394781460530309
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.063404ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 12:46 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 12:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 12:46 test-1714394781460530309
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh cat /mount-9p/test-1714394781460530309
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-341155 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [686d51e3-412f-43b6-b5ce-394ff7f1e032] Pending
helpers_test.go:344: "busybox-mount" [686d51e3-412f-43b6-b5ce-394ff7f1e032] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [686d51e3-412f-43b6-b5ce-394ff7f1e032] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [686d51e3-412f-43b6-b5ce-394ff7f1e032] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004016109s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-341155 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdany-port1403821105/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.98s)
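
The mount test drives "minikube mount host-dir:/mount-9p" in the background, confirms the 9p mount from inside the VM with findmnt, then runs a pod that reads and writes through it. The first findmnt failure above is only a race: the probe ran before the mount daemon finished, and the retry succeeded. A sketch of the host-side part; the host directory /tmp/mount-src is hypothetical, while the guest path matches the log:

// mount_9p.go: start a background mount and poll until the guest sees it.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-341155",
		"/tmp/mount-src:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // the test stops the daemon more carefully than this

	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second) // the first probe often races the mount, as in the log
	}
	log.Fatal("/mount-9p never appeared in the guest")
}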

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdspecific-port2674008965/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.172178ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdspecific-port2674008965/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "sudo umount -f /mount-9p": exit status 1 (241.914044ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-341155 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdspecific-port2674008965/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T" /mount1: exit status 1 (249.31338ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-341155 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3114706434/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
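
The cleanup check above starts three mount daemons for the same profile and then tears them all down with "minikube mount --kill=true"; afterwards the helpers find no parent processes left, which is the point of the test. A minimal sketch of that cleanup call:

// mount_cleanup.go: kill every mount process associated with the profile.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-341155", "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill: %v\n%s", err, out)
	}
	log.Printf("mount processes for the profile were killed:\n%s", out)
}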

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service list -o json
functional_test.go:1490: Took "360.03795ms" to run "out/minikube-linux-amd64 -p functional-341155 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.81:31412
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341155 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-341155
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-341155
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341155 image ls --format short --alsologtostderr:
I0429 12:47:01.474618  869899 out.go:291] Setting OutFile to fd 1 ...
I0429 12:47:01.474773  869899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.474786  869899 out.go:304] Setting ErrFile to fd 2...
I0429 12:47:01.474793  869899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.475148  869899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
I0429 12:47:01.478000  869899 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.478158  869899 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.478589  869899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.478655  869899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.495461  869899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
I0429 12:47:01.496032  869899 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.496735  869899 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.496792  869899 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.497167  869899 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.497372  869899 main.go:141] libmachine: (functional-341155) Calling .GetState
I0429 12:47:01.499457  869899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.499505  869899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.514638  869899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
I0429 12:47:01.515209  869899 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.515821  869899 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.515846  869899 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.516211  869899 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.516430  869899 main.go:141] libmachine: (functional-341155) Calling .DriverName
I0429 12:47:01.516681  869899 ssh_runner.go:195] Run: systemctl --version
I0429 12:47:01.516760  869899 main.go:141] libmachine: (functional-341155) Calling .GetSSHHostname
I0429 12:47:01.520426  869899 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.520913  869899 main.go:141] libmachine: (functional-341155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:3a:19", ip: ""} in network mk-functional-341155: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:45 +0000 UTC Type:0 Mac:52:54:00:62:3a:19 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:functional-341155 Clientid:01:52:54:00:62:3a:19}
I0429 12:47:01.520948  869899 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined IP address 192.168.39.81 and MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.521071  869899 main.go:141] libmachine: (functional-341155) Calling .GetSSHPort
I0429 12:47:01.521255  869899 main.go:141] libmachine: (functional-341155) Calling .GetSSHKeyPath
I0429 12:47:01.521387  869899 main.go:141] libmachine: (functional-341155) Calling .GetSSHUsername
I0429 12:47:01.521507  869899 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/functional-341155/id_rsa Username:docker}
I0429 12:47:01.618685  869899 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:47:01.701707  869899 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.701724  869899 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.702096  869899 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
I0429 12:47:01.702102  869899 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.702136  869899 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:01.702150  869899 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.702159  869899 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.702396  869899 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.702418  869899 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
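
"minikube image ls" supports several output formats: the short format above prints one reference per line, and the table format in the following section adds image IDs and sizes. A small sketch that runs both formats back to back against this profile:

// image_ls.go: list cached images in the short and table formats.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	for _, format := range []string{"short", "table"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
			"image", "ls", "--format", format).CombinedOutput()
		if err != nil {
			log.Fatalf("image ls --format %s: %v\n%s", format, err, out)
		}
		fmt.Printf("--- %s ---\n%s\n", format, out)
	}
}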

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341155 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| localhost/minikube-local-cache-test     | functional-341155  | 0742a446528f8 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| gcr.io/google-containers/addon-resizer  | functional-341155  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341155 image ls --format table --alsologtostderr:
I0429 12:47:01.777154  870001 out.go:291] Setting OutFile to fd 1 ...
I0429 12:47:01.777303  870001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.777315  870001 out.go:304] Setting ErrFile to fd 2...
I0429 12:47:01.777322  870001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.777652  870001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
I0429 12:47:01.778492  870001 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.778639  870001 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.779142  870001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.779189  870001 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.796235  870001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
I0429 12:47:01.796738  870001 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.797463  870001 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.797485  870001 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.797926  870001 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.798159  870001 main.go:141] libmachine: (functional-341155) Calling .GetState
I0429 12:47:01.800345  870001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.800398  870001 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.817803  870001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
I0429 12:47:01.818272  870001 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.818828  870001 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.818860  870001 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.819182  870001 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.819424  870001 main.go:141] libmachine: (functional-341155) Calling .DriverName
I0429 12:47:01.819627  870001 ssh_runner.go:195] Run: systemctl --version
I0429 12:47:01.819659  870001 main.go:141] libmachine: (functional-341155) Calling .GetSSHHostname
I0429 12:47:01.822835  870001 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.823253  870001 main.go:141] libmachine: (functional-341155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:3a:19", ip: ""} in network mk-functional-341155: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:45 +0000 UTC Type:0 Mac:52:54:00:62:3a:19 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:functional-341155 Clientid:01:52:54:00:62:3a:19}
I0429 12:47:01.823287  870001 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined IP address 192.168.39.81 and MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.823446  870001 main.go:141] libmachine: (functional-341155) Calling .GetSSHPort
I0429 12:47:01.823629  870001 main.go:141] libmachine: (functional-341155) Calling .GetSSHKeyPath
I0429 12:47:01.823796  870001 main.go:141] libmachine: (functional-341155) Calling .GetSSHUsername
I0429 12:47:01.823959  870001 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/functional-341155/id_rsa Username:docker}
I0429 12:47:01.917693  870001 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:47:02.007309  870001 main.go:141] libmachine: Making call to close driver server
I0429 12:47:02.007381  870001 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:02.007755  870001 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:02.007777  870001 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:02.007797  870001 main.go:141] libmachine: Making call to close driver server
I0429 12:47:02.007807  870001 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:02.008080  870001 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
I0429 12:47:02.008129  870001 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:02.008144  870001 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341155 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"56cc51211
6c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0742a446528f8278cb8985339cb8c9b091314e985c0bffab3e5c520c7b7410aa","repoDigests":["localhost/minikube-local-cache-test@sha256:e88703adaa690f475f678e62290f324e1c44caf28a7533ffaa956bed7b8d2be2"],"repoTags":["localhost/minikube-local-cache-test:functional-341155"],"size":"3330"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"11
2170310"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags"
:["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k
8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-341155"],"size":"34114467"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","r
epoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341155 image ls --format json --alsologtostderr:
I0429 12:47:01.758105  869987 out.go:291] Setting OutFile to fd 1 ...
I0429 12:47:01.758306  869987 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.758319  869987 out.go:304] Setting ErrFile to fd 2...
I0429 12:47:01.758326  869987 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.758683  869987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
I0429 12:47:01.759666  869987 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.759832  869987 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.760549  869987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.760620  869987 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.778110  869987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
I0429 12:47:01.778630  869987 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.779318  869987 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.779347  869987 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.779821  869987 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.780052  869987 main.go:141] libmachine: (functional-341155) Calling .GetState
I0429 12:47:01.782304  869987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.782367  869987 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.798673  869987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34761
I0429 12:47:01.799111  869987 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.799788  869987 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.799826  869987 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.800217  869987 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.800395  869987 main.go:141] libmachine: (functional-341155) Calling .DriverName
I0429 12:47:01.800616  869987 ssh_runner.go:195] Run: systemctl --version
I0429 12:47:01.800640  869987 main.go:141] libmachine: (functional-341155) Calling .GetSSHHostname
I0429 12:47:01.803765  869987 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.804236  869987 main.go:141] libmachine: (functional-341155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:3a:19", ip: ""} in network mk-functional-341155: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:45 +0000 UTC Type:0 Mac:52:54:00:62:3a:19 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:functional-341155 Clientid:01:52:54:00:62:3a:19}
I0429 12:47:01.804272  869987 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined IP address 192.168.39.81 and MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.804453  869987 main.go:141] libmachine: (functional-341155) Calling .GetSSHPort
I0429 12:47:01.804652  869987 main.go:141] libmachine: (functional-341155) Calling .GetSSHKeyPath
I0429 12:47:01.804828  869987 main.go:141] libmachine: (functional-341155) Calling .GetSSHUsername
I0429 12:47:01.804990  869987 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/functional-341155/id_rsa Username:docker}
I0429 12:47:01.891770  869987 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:47:01.962481  869987 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.962515  869987 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.962883  869987 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.962905  869987 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
I0429 12:47:01.962910  869987 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:01.962943  869987 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.962952  869987 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.963210  869987 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.963226  869987 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
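
For anyone post-processing this listing, the stdout above is a plain JSON array whose objects carry the id, repoDigests, repoTags and size keys. The following is a minimal sketch, not part of the test suite: it re-runs the same `image ls --format json` invocation (binary path and profile name taken from this run) and decodes the result into Go structs.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageEntry mirrors the objects printed by `image ls --format json` above
// (size is emitted as a string of bytes).
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation the test performs, minus --alsologtostderr.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		name := img.ID[:13] // fall back to a truncated ID for untagged images
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", name, img.Size)
	}
}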

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341155 image ls --format yaml --alsologtostderr:
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-341155
size: "34114467"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0742a446528f8278cb8985339cb8c9b091314e985c0bffab3e5c520c7b7410aa
repoDigests:
- localhost/minikube-local-cache-test@sha256:e88703adaa690f475f678e62290f324e1c44caf28a7533ffaa956bed7b8d2be2
repoTags:
- localhost/minikube-local-cache-test:functional-341155
size: "3330"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341155 image ls --format yaml --alsologtostderr:
I0429 12:47:01.475217  869900 out.go:291] Setting OutFile to fd 1 ...
I0429 12:47:01.475347  869900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.475375  869900 out.go:304] Setting ErrFile to fd 2...
I0429 12:47:01.475383  869900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.475694  869900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
I0429 12:47:01.476471  869900 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.476619  869900 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.477158  869900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.477231  869900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.493488  869900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45133
I0429 12:47:01.494009  869900 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.494691  869900 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.494724  869900 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.495144  869900 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.495461  869900 main.go:141] libmachine: (functional-341155) Calling .GetState
I0429 12:47:01.497738  869900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.497788  869900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.513518  869900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
I0429 12:47:01.513949  869900 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.514547  869900 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.514572  869900 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.514913  869900 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.515105  869900 main.go:141] libmachine: (functional-341155) Calling .DriverName
I0429 12:47:01.515308  869900 ssh_runner.go:195] Run: systemctl --version
I0429 12:47:01.515346  869900 main.go:141] libmachine: (functional-341155) Calling .GetSSHHostname
I0429 12:47:01.519005  869900 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.519262  869900 main.go:141] libmachine: (functional-341155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:3a:19", ip: ""} in network mk-functional-341155: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:45 +0000 UTC Type:0 Mac:52:54:00:62:3a:19 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:functional-341155 Clientid:01:52:54:00:62:3a:19}
I0429 12:47:01.519290  869900 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined IP address 192.168.39.81 and MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.519424  869900 main.go:141] libmachine: (functional-341155) Calling .GetSSHPort
I0429 12:47:01.519641  869900 main.go:141] libmachine: (functional-341155) Calling .GetSSHKeyPath
I0429 12:47:01.519809  869900 main.go:141] libmachine: (functional-341155) Calling .GetSSHUsername
I0429 12:47:01.519952  869900 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/functional-341155/id_rsa Username:docker}
I0429 12:47:01.606282  869900 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:47:01.671242  869900 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.671263  869900 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.671629  869900 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.671669  869900 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:01.671686  869900 main.go:141] libmachine: Making call to close driver server
I0429 12:47:01.671699  869900 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:01.671967  869900 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
I0429 12:47:01.672006  869900 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:01.672022  869900 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341155 ssh pgrep buildkitd: exit status 1 (237.727925ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image build -t localhost/my-image:functional-341155 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image build -t localhost/my-image:functional-341155 testdata/build --alsologtostderr: (1.816934218s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341155 image build -t localhost/my-image:functional-341155 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b1e0bf355c9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-341155
--> 4c371c93295
Successfully tagged localhost/my-image:functional-341155
4c371c93295a1821c3c58f49cc436f00c2c2f099a27bc9371accb6240cb4f9fc
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341155 image build -t localhost/my-image:functional-341155 testdata/build --alsologtostderr:
I0429 12:47:01.703576  869975 out.go:291] Setting OutFile to fd 1 ...
I0429 12:47:01.703816  869975 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.703825  869975 out.go:304] Setting ErrFile to fd 2...
I0429 12:47:01.703829  869975 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:47:01.704048  869975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
I0429 12:47:01.704817  869975 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.705447  869975 config.go:182] Loaded profile config "functional-341155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 12:47:01.705844  869975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.705883  869975 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.726317  869975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
I0429 12:47:01.727215  869975 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.728091  869975 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.728116  869975 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.728550  869975 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.728847  869975 main.go:141] libmachine: (functional-341155) Calling .GetState
I0429 12:47:01.730787  869975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 12:47:01.730834  869975 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:47:01.751164  869975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
I0429 12:47:01.751653  869975 main.go:141] libmachine: () Calling .GetVersion
I0429 12:47:01.752242  869975 main.go:141] libmachine: Using API Version  1
I0429 12:47:01.752274  869975 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:47:01.752613  869975 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:47:01.752797  869975 main.go:141] libmachine: (functional-341155) Calling .DriverName
I0429 12:47:01.753045  869975 ssh_runner.go:195] Run: systemctl --version
I0429 12:47:01.753077  869975 main.go:141] libmachine: (functional-341155) Calling .GetSSHHostname
I0429 12:47:01.756187  869975 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.756691  869975 main.go:141] libmachine: (functional-341155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:3a:19", ip: ""} in network mk-functional-341155: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:45 +0000 UTC Type:0 Mac:52:54:00:62:3a:19 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:functional-341155 Clientid:01:52:54:00:62:3a:19}
I0429 12:47:01.756723  869975 main.go:141] libmachine: (functional-341155) DBG | domain functional-341155 has defined IP address 192.168.39.81 and MAC address 52:54:00:62:3a:19 in network mk-functional-341155
I0429 12:47:01.756856  869975 main.go:141] libmachine: (functional-341155) Calling .GetSSHPort
I0429 12:47:01.757043  869975 main.go:141] libmachine: (functional-341155) Calling .GetSSHKeyPath
I0429 12:47:01.757179  869975 main.go:141] libmachine: (functional-341155) Calling .GetSSHUsername
I0429 12:47:01.757309  869975 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/functional-341155/id_rsa Username:docker}
I0429 12:47:01.847200  869975 build_images.go:161] Building image from path: /tmp/build.1943634718.tar
I0429 12:47:01.847271  869975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 12:47:01.860779  869975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1943634718.tar
I0429 12:47:01.866517  869975 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1943634718.tar: stat -c "%s %y" /var/lib/minikube/build/build.1943634718.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1943634718.tar': No such file or directory
I0429 12:47:01.866561  869975 ssh_runner.go:362] scp /tmp/build.1943634718.tar --> /var/lib/minikube/build/build.1943634718.tar (3072 bytes)
I0429 12:47:01.898438  869975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1943634718
I0429 12:47:01.914247  869975 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1943634718 -xf /var/lib/minikube/build/build.1943634718.tar
I0429 12:47:01.940384  869975 crio.go:315] Building image: /var/lib/minikube/build/build.1943634718
I0429 12:47:01.940480  869975 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-341155 /var/lib/minikube/build/build.1943634718 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0429 12:47:03.427226  869975 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-341155 /var/lib/minikube/build/build.1943634718 --cgroup-manager=cgroupfs: (1.486718617s)
I0429 12:47:03.427312  869975 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1943634718
I0429 12:47:03.438431  869975 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1943634718.tar
I0429 12:47:03.448713  869975 build_images.go:217] Built localhost/my-image:functional-341155 from /tmp/build.1943634718.tar
I0429 12:47:03.448757  869975 build_images.go:133] succeeded building to: functional-341155
I0429 12:47:03.448762  869975 build_images.go:134] failed building to: 
I0429 12:47:03.448790  869975 main.go:141] libmachine: Making call to close driver server
I0429 12:47:03.448798  869975 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:03.449129  869975 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:03.449157  869975 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:03.449165  869975 main.go:141] libmachine: Making call to close driver server
I0429 12:47:03.449173  869975 main.go:141] libmachine: (functional-341155) Calling .Close
I0429 12:47:03.449172  869975 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
I0429 12:47:03.449424  869975 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:47:03.449441  869975 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:47:03.449457  869975 main.go:141] libmachine: (functional-341155) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.30s)
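
The stderr above shows what `image build` does on this crio profile: the local context is tarred, copied to /var/lib/minikube/build inside the VM, extracted, and built with `sudo podman build ... --cgroup-manager=cgroupfs`. Below is a minimal sketch, not the repository's testdata/build: it writes a context matching the three STEP lines (FROM busybox, RUN true, ADD content.txt /) and drives the same `image build` command; the temp-dir layout and the content.txt payload are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context equivalent to the three STEP lines above.
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		fmt.Println("mkdir temp:", err)
		return
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		fmt.Println("write Dockerfile:", err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		fmt.Println("write content.txt:", err)
		return
	}

	// Same command the test drives; minikube tars this directory, copies it to
	// /var/lib/minikube/build inside the VM and runs podman build there.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
		"image", "build", "-t", "localhost/my-image:functional-341155", dir, "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}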

TestFunctional/parallel/ImageCommands/Setup (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-341155
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.81:31412
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr: (5.773838288s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr: (4.079814482s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-341155
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image load --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr: (6.590442424s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.69s)
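
The three daemon-load tests above share one pattern: pull an image into the local docker daemon, retag it with the profile name, then push it into the cluster runtime with `image load --daemon`. A standalone sketch of that sequence follows, assuming docker and the minikube binary from this run are available; it is illustrative, not the test code.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes one step and aborts with its output if it fails.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	const tag = "gcr.io/google-containers/addon-resizer:functional-341155"
	run("docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.9")
	run("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.9", tag)
	run("out/minikube-linux-amd64", "-p", "functional-341155", "image", "load", "--daemon", tag)

	// Confirm the image is now visible to the cluster runtime, like the test's
	// follow-up `image ls` step does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	fmt.Print(string(out))
}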

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image save gcr.io/google-containers/addon-resizer:functional-341155 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image save gcr.io/google-containers/addon-resizer:functional-341155 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.775128083s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.78s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.881414082s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.22s)
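
ImageSaveToFile and ImageLoadFromFile round-trip the retagged addon-resizer image through a tarball on the host. A minimal sketch of the same round trip is below; the tarball is written to a temp directory rather than the CI workspace path used above, which is an assumption of the sketch.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	const image = "gcr.io/google-containers/addon-resizer:functional-341155"
	tar := filepath.Join(os.TempDir(), "addon-resizer-save.tar")

	// Save the image from the cluster runtime to a tarball, as ImageSaveToFile does.
	save := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
		"image", "save", image, tar)
	if out, err := save.CombinedOutput(); err != nil {
		fmt.Printf("save failed: %v\n%s", err, out)
		return
	}

	// Load it back, as ImageLoadFromFile does.
	load := exec.Command("out/minikube-linux-amd64", "-p", "functional-341155",
		"image", "load", tar)
	if out, err := load.CombinedOutput(); err != nil {
		fmt.Printf("load failed: %v\n%s", err, out)
		return
	}
	fmt.Println("round trip complete:", tar)
}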

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-341155
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-341155 image save --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr
2024/04/29 12:46:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-341155 image save --daemon gcr.io/google-containers/addon-resizer:functional-341155 --alsologtostderr: (2.31148362s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-341155
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.35s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-341155
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-341155
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-341155
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (203.12s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-212075 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-212075 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.349517888s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.12s)

TestMultiControlPlane/serial/DeployApp (5.29s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-212075 -- rollout status deployment/busybox: (2.572912563s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-9q8rf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-rcq9m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-xw452 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-9q8rf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-rcq9m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-xw452 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-9q8rf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-rcq9m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-xw452 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.29s)

TestMultiControlPlane/serial/PingHostFromPods (1.45s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-9q8rf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-9q8rf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-rcq9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-rcq9m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-xw452 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-212075 -- exec busybox-fc5497c4f-xw452 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
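
The check above resolves host.minikube.internal from inside each busybox pod (taking field 3 of line 5 of the nslookup output) and then pings the resulting host gateway. A sketch of that check for a single pod follows; the context name, pod name and the 192.168.39.1 gateway it resolves to are taken from this run and will differ elsewhere.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const (
		kubeContext = "ha-212075"
		pod         = "busybox-fc5497c4f-9q8rf" // pod name from this run; varies per deployment
	)

	// Resolve the host address from inside the pod, the same pipeline the test runs.
	resolve := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := resolve.Output()
	if err != nil {
		log.Fatalf("nslookup in pod: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the resolved address from inside the pod, as the test does.
	ping := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"sh", "-c", fmt.Sprintf("ping -c 1 %s", hostIP))
	if pingOut, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping %s: %v\n%s", hostIP, err, pingOut)
	}
	fmt.Println("ping to", hostIP, "succeeded")
}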

TestMultiControlPlane/serial/AddWorkerNode (48.05s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-212075 -v=7 --alsologtostderr
E0429 12:51:19.252802  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.258860  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.269017  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.289391  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.329806  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.410225  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.570703  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:19.891023  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:20.531690  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:21.812069  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 12:51:24.373172  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-212075 -v=7 --alsologtostderr: (47.095218544s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.05s)

TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-212075 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

TestMultiControlPlane/serial/CopyFile (14.53s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status --output json -v=7 --alsologtostderr
E0429 12:51:29.493866  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp testdata/cp-test.txt ha-212075:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075:/home/docker/cp-test.txt ha-212075-m02:/home/docker/cp-test_ha-212075_ha-212075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test_ha-212075_ha-212075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075:/home/docker/cp-test.txt ha-212075-m03:/home/docker/cp-test_ha-212075_ha-212075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test_ha-212075_ha-212075-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075:/home/docker/cp-test.txt ha-212075-m04:/home/docker/cp-test_ha-212075_ha-212075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test_ha-212075_ha-212075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp testdata/cp-test.txt ha-212075-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m02:/home/docker/cp-test.txt ha-212075:/home/docker/cp-test_ha-212075-m02_ha-212075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test_ha-212075-m02_ha-212075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m02:/home/docker/cp-test.txt ha-212075-m03:/home/docker/cp-test_ha-212075-m02_ha-212075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test_ha-212075-m02_ha-212075-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m02:/home/docker/cp-test.txt ha-212075-m04:/home/docker/cp-test_ha-212075-m02_ha-212075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test_ha-212075-m02_ha-212075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp testdata/cp-test.txt ha-212075-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt ha-212075:/home/docker/cp-test_ha-212075-m03_ha-212075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test_ha-212075-m03_ha-212075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt ha-212075-m02:/home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test_ha-212075-m03_ha-212075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m03:/home/docker/cp-test.txt ha-212075-m04:/home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt
E0429 12:51:39.734576  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test_ha-212075-m03_ha-212075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp testdata/cp-test.txt ha-212075-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1860612890/001/cp-test_ha-212075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt ha-212075:/home/docker/cp-test_ha-212075-m04_ha-212075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075 "sudo cat /home/docker/cp-test_ha-212075-m04_ha-212075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt ha-212075-m02:/home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m02 "sudo cat /home/docker/cp-test_ha-212075-m04_ha-212075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 cp ha-212075-m04:/home/docker/cp-test.txt ha-212075-m03:/home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 ssh -n ha-212075-m03 "sudo cat /home/docker/cp-test_ha-212075-m04_ha-212075-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.53s)
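
The copy-and-verify round trip logged above reduces to two CLI calls per node pair: minikube cp onto the target node, then minikube ssh -n to read the file back. A minimal Go sketch of that loop, assuming the ha-212075 profile from this run is still up and that out/minikube-linux-amd64 is reachable from the working directory; profile and node names come from the log, the rest is illustrative and is not the harness code itself.

// copyverify.go: hedged sketch of the cp/ssh pattern exercised by CopyFile above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to the minikube binary used in this report; the binary path and
// profile name are assumptions copied from the log lines above.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile, node := "ha-212075", "ha-212075-m02"
	// Copy a local file onto one node, then read it back over ssh to confirm it landed.
	run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
	got := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	fmt.Print(got)
}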

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.513909315s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 node delete m03 -v=7 --alsologtostderr: (16.753622598s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.58s)
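
The Ready check at ha_test.go:519 above leans on a kubectl go-template rather than JSON parsing: it walks every node, finds the Ready condition, and prints its status. A standalone sketch of how that template evaluates, run here with text/template against a hand-built node list; the sample data is fabricated for illustration, while the template string is copied from the command above.

// readytemplate.go: sketch of the go-template used by the Ready-node checks above.
package main

import (
	"os"
	"text/template"
)

const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Minimal stand-in for the structure `kubectl get nodes -o go-template=...` iterates over.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
				map[string]any{"type": "MemoryPressure", "status": "False"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints " True" (one entry per node whose Ready condition is True).
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}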

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (365.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-212075 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 13:06:19.253168  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 13:07:42.300616  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-212075 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m4.411235709s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (365.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-212075 --control-plane -v=7 --alsologtostderr
E0429 13:11:19.253674  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-212075 --control-plane -v=7 --alsologtostderr: (1m11.022366804s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-amd64 -p ha-212075 status -v=7 --alsologtostderr: (1.007419159s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.03s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                    
TestJSONOutput/start/Command (99.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-625579 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-625579 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.043804937s)
--- PASS: TestJSONOutput/start/Command (99.04s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
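
Going by their names, DistinctCurrentSteps and IncreasingCurrentSteps assert that the currentstep values in the JSON events emitted by the start command above never repeat and never go backwards. A small sketch of that kind of check over captured output lines; the two sample events are fabricated stand-ins whose keys follow the JSON visible later in this report under TestErrorJSONOutput, not output recorded from this run.

// stepcheck.go: sketch of a monotonic-step check in the spirit of IncreasingCurrentSteps.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

type stepEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Stand-in for the per-line stdout of `minikube start --output=json`.
	lines := []string{
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","name":"Initial Minikube Setup","totalsteps":"19"}}`,
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","name":"Selecting Driver","totalsteps":"19"}}`,
	}
	last := -1
	for _, l := range lines {
		var ev stepEvent
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			log.Fatal(err)
		}
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		n, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil || n <= last {
			log.Fatalf("steps not strictly increasing at %q", l)
		}
		last = n
	}
	fmt.Println("currentstep values are strictly increasing")
}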

                                                
                                    
TestJSONOutput/pause/Command (0.82s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-625579 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-625579 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-625579 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-625579 --output=json --user=testUser: (7.434789957s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-194852 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-194852 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.087353ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bab8456a-5cdf-4568-a34d-e1e8551c040d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-194852] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7b7bdca-7d29-446e-a8e3-1067c06da299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18773"}}
	{"specversion":"1.0","id":"6c62c5b1-61a7-446b-9cf4-e3aaa3c88833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d234033-1a22-4fe8-8e92-a735c4932aeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig"}}
	{"specversion":"1.0","id":"9c83bdbe-265a-4bc9-b69d-de3a5d052236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube"}}
	{"specversion":"1.0","id":"520ef6df-d7ad-43fd-aa0c-8f1eb1926762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0234fdea-3880-47a3-bbe6-5e06844aa0cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e273fd74-9d0b-45f3-82cf-82eba1a0c7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-194852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-194852
--- PASS: TestErrorJSONOutput (0.26s)
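
Every line minikube prints under --output=json is a CloudEvents-style envelope (specversion, type, data), and the error case above ends with an io.k8s.sigs.minikube.error event carrying the exit code and message. A short sketch of picking those fields out of one such line with encoding/json; the sample line is the one captured above, and the struct covers only the keys visible there rather than minikube's own event type.

// decode_event.go: sketch of decoding one minikube --output=json line.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"e273fd74-9d0b-45f3-82cf-82eba1a0c7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		// Prints: exit code 56: The driver 'fail' is not supported on linux/amd64
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}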

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (96.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-246140 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-246140 --driver=kvm2  --container-runtime=crio: (48.03911174s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-249025 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-249025 --driver=kvm2  --container-runtime=crio: (45.53469162s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-246140
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-249025
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-249025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-249025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-249025: (1.041483557s)
helpers_test.go:175: Cleaning up "first-246140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-246140
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-246140: (1.079882385s)
--- PASS: TestMinikubeProfile (96.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-964213 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-964213 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.969892826s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-964213 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-964213 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)
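
StartWithMountFirst boots the profile with a 9p host mount (note the explicit --mount-msize, --mount-port, --mount-uid and --mount-gid values), and VerifyMountFirst then checks from inside the VM that a 9p filesystem is actually mounted. A sketch of the same verification driven from Go, filtering the mount table in-process instead of piping through grep; the profile name and binary path are assumptions copied from this run.

// verify9p.go: sketch of the "mount | grep 9p" verification shown above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "mount-start-1-964213", "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") { // same filter the test applies with grep
			fmt.Println(line)
		}
	}
}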

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-982342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0429 13:16:19.253650  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-982342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.116292959s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.94s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-964213 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

                                                
                                    
TestMountStart/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-982342
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-982342: (1.389302736s)
--- PASS: TestMountStart/serial/Stop (1.39s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.77s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-982342
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-982342: (23.767712082s)
--- PASS: TestMountStart/serial/RestartStopped (24.77s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-982342 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (103.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404116 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404116 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.098619503s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.57s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-404116 -- rollout status deployment/busybox: (2.326440639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-k79pb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-qv47r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-k79pb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-qv47r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-k79pb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-qv47r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.24s)
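
The deployment check above amounts to: apply the busybox manifest, wait for the rollout, then run nslookup from each pod against kubernetes.io, kubernetes.default, and the fully qualified cluster-local name. A compressed sketch of that probe loop; it assumes kubectl is already pointed at the multinode-404116 context, and the pod names, which change per run, are the ones from this log.

// dnscheck.go: sketch of the per-pod DNS probes run by DeployApp2Nodes above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// On a live cluster the pod list should be read back from `kubectl get pods`,
	// as the test does; these names are taken from this report.
	pods := []string{"busybox-fc5497c4f-k79pb", "busybox-fc5497c4f-qv47r"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("kubectl", "--context", "multinode-404116",
				"exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	log.Println("all pods resolved all names")
}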

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-k79pb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-k79pb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-qv47r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404116 -- exec busybox-fc5497c4f-qv47r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (43.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-404116 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-404116 -v 3 --alsologtostderr: (43.357947736s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.98s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-404116 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp testdata/cp-test.txt multinode-404116:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116:/home/docker/cp-test.txt multinode-404116-m02:/home/docker/cp-test_multinode-404116_multinode-404116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test_multinode-404116_multinode-404116-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116:/home/docker/cp-test.txt multinode-404116-m03:/home/docker/cp-test_multinode-404116_multinode-404116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test_multinode-404116_multinode-404116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp testdata/cp-test.txt multinode-404116-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt multinode-404116:/home/docker/cp-test_multinode-404116-m02_multinode-404116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test_multinode-404116-m02_multinode-404116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m02:/home/docker/cp-test.txt multinode-404116-m03:/home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test_multinode-404116-m02_multinode-404116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp testdata/cp-test.txt multinode-404116-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403422532/001/cp-test_multinode-404116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt multinode-404116:/home/docker/cp-test_multinode-404116-m03_multinode-404116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116 "sudo cat /home/docker/cp-test_multinode-404116-m03_multinode-404116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 cp multinode-404116-m03:/home/docker/cp-test.txt multinode-404116-m02:/home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 ssh -n multinode-404116-m02 "sudo cat /home/docker/cp-test_multinode-404116-m03_multinode-404116-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.22s)

                                                
                                    
TestMultiNode/serial/StopNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-404116 node stop m03: (1.583275072s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404116 status: exit status 7 (468.556714ms)

                                                
                                                
-- stdout --
	multinode-404116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-404116-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-404116-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr: exit status 7 (485.705678ms)

                                                
                                                
-- stdout --
	multinode-404116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-404116-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-404116-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:19:37.532734  887964 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:19:37.532861  887964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:19:37.532866  887964 out.go:304] Setting ErrFile to fd 2...
	I0429 13:19:37.532870  887964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:19:37.533107  887964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-847310/.minikube/bin
	I0429 13:19:37.533289  887964 out.go:298] Setting JSON to false
	I0429 13:19:37.533318  887964 mustload.go:65] Loading cluster: multinode-404116
	I0429 13:19:37.533403  887964 notify.go:220] Checking for updates...
	I0429 13:19:37.533755  887964 config.go:182] Loaded profile config "multinode-404116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 13:19:37.533771  887964 status.go:255] checking status of multinode-404116 ...
	I0429 13:19:37.534179  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.534241  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.555742  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0429 13:19:37.556532  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.557192  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.557219  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.557597  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.557883  887964 main.go:141] libmachine: (multinode-404116) Calling .GetState
	I0429 13:19:37.559761  887964 status.go:330] multinode-404116 host status = "Running" (err=<nil>)
	I0429 13:19:37.559783  887964 host.go:66] Checking if "multinode-404116" exists ...
	I0429 13:19:37.560252  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.560325  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.578678  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I0429 13:19:37.579192  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.579844  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.579872  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.580314  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.580545  887964 main.go:141] libmachine: (multinode-404116) Calling .GetIP
	I0429 13:19:37.583799  887964 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:19:37.584347  887964 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:19:37.584389  887964 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:19:37.584604  887964 host.go:66] Checking if "multinode-404116" exists ...
	I0429 13:19:37.584975  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.585037  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.603029  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46851
	I0429 13:19:37.603678  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.604320  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.604349  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.604726  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.604942  887964 main.go:141] libmachine: (multinode-404116) Calling .DriverName
	I0429 13:19:37.605137  887964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:19:37.605172  887964 main.go:141] libmachine: (multinode-404116) Calling .GetSSHHostname
	I0429 13:19:37.608153  887964 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:19:37.608661  887964 main.go:141] libmachine: (multinode-404116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:13", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:17:09 +0000 UTC Type:0 Mac:52:54:00:80:60:13 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-404116 Clientid:01:52:54:00:80:60:13}
	I0429 13:19:37.608690  887964 main.go:141] libmachine: (multinode-404116) DBG | domain multinode-404116 has defined IP address 192.168.39.179 and MAC address 52:54:00:80:60:13 in network mk-multinode-404116
	I0429 13:19:37.609001  887964 main.go:141] libmachine: (multinode-404116) Calling .GetSSHPort
	I0429 13:19:37.609270  887964 main.go:141] libmachine: (multinode-404116) Calling .GetSSHKeyPath
	I0429 13:19:37.609433  887964 main.go:141] libmachine: (multinode-404116) Calling .GetSSHUsername
	I0429 13:19:37.609594  887964 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116/id_rsa Username:docker}
	I0429 13:19:37.696341  887964 ssh_runner.go:195] Run: systemctl --version
	I0429 13:19:37.704375  887964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:19:37.721298  887964 kubeconfig.go:125] found "multinode-404116" server: "https://192.168.39.179:8443"
	I0429 13:19:37.721335  887964 api_server.go:166] Checking apiserver status ...
	I0429 13:19:37.721371  887964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:19:37.739289  887964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0429 13:19:37.753070  887964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:19:37.753144  887964 ssh_runner.go:195] Run: ls
	I0429 13:19:37.758687  887964 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I0429 13:19:37.763850  887964 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I0429 13:19:37.763884  887964 status.go:422] multinode-404116 apiserver status = Running (err=<nil>)
	I0429 13:19:37.763897  887964 status.go:257] multinode-404116 status: &{Name:multinode-404116 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:19:37.763939  887964 status.go:255] checking status of multinode-404116-m02 ...
	I0429 13:19:37.764367  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.764406  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.780525  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0429 13:19:37.780958  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.781441  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.781465  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.781886  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.782107  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetState
	I0429 13:19:37.783973  887964 status.go:330] multinode-404116-m02 host status = "Running" (err=<nil>)
	I0429 13:19:37.784004  887964 host.go:66] Checking if "multinode-404116-m02" exists ...
	I0429 13:19:37.784359  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.784402  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.800915  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0429 13:19:37.801375  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.801862  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.801884  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.802280  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.802510  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetIP
	I0429 13:19:37.805571  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | domain multinode-404116-m02 has defined MAC address 52:54:00:5d:2e:64 in network mk-multinode-404116
	I0429 13:19:37.805991  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:2e:64", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:18:12 +0000 UTC Type:0 Mac:52:54:00:5d:2e:64 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-404116-m02 Clientid:01:52:54:00:5d:2e:64}
	I0429 13:19:37.806028  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | domain multinode-404116-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:5d:2e:64 in network mk-multinode-404116
	I0429 13:19:37.806143  887964 host.go:66] Checking if "multinode-404116-m02" exists ...
	I0429 13:19:37.806460  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.806488  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.823535  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0429 13:19:37.823983  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.824490  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.824514  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.824861  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.825086  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .DriverName
	I0429 13:19:37.825293  887964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:19:37.825316  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetSSHHostname
	I0429 13:19:37.828212  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | domain multinode-404116-m02 has defined MAC address 52:54:00:5d:2e:64 in network mk-multinode-404116
	I0429 13:19:37.828644  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:2e:64", ip: ""} in network mk-multinode-404116: {Iface:virbr1 ExpiryTime:2024-04-29 14:18:12 +0000 UTC Type:0 Mac:52:54:00:5d:2e:64 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-404116-m02 Clientid:01:52:54:00:5d:2e:64}
	I0429 13:19:37.828673  887964 main.go:141] libmachine: (multinode-404116-m02) DBG | domain multinode-404116-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:5d:2e:64 in network mk-multinode-404116
	I0429 13:19:37.828856  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetSSHPort
	I0429 13:19:37.829075  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetSSHKeyPath
	I0429 13:19:37.829218  887964 main.go:141] libmachine: (multinode-404116-m02) Calling .GetSSHUsername
	I0429 13:19:37.829350  887964 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-847310/.minikube/machines/multinode-404116-m02/id_rsa Username:docker}
	I0429 13:19:37.911513  887964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:19:37.928836  887964 status.go:257] multinode-404116-m02 status: &{Name:multinode-404116-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:19:37.928878  887964 status.go:255] checking status of multinode-404116-m03 ...
	I0429 13:19:37.929243  887964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 13:19:37.929338  887964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:19:37.947078  887964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0429 13:19:37.947619  887964 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:19:37.948166  887964 main.go:141] libmachine: Using API Version  1
	I0429 13:19:37.948195  887964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:19:37.948534  887964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:19:37.948672  887964 main.go:141] libmachine: (multinode-404116-m03) Calling .GetState
	I0429 13:19:37.950337  887964 status.go:330] multinode-404116-m03 host status = "Stopped" (err=<nil>)
	I0429 13:19:37.950359  887964 status.go:343] host is not running, skipping remaining checks
	I0429 13:19:37.950368  887964 status.go:257] multinode-404116-m03 status: &{Name:multinode-404116-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)
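
The log above shows what a node status check actually does: minikube opens an SSH session to the node, reads root-disk usage with df/awk, and asks systemd whether the kubelet unit is active. A minimal sketch of the same probes run by hand, assuming minikube's ssh subcommand with the -n/--node flag and the profile/node names from this run:

	$ minikube -p multinode-404116 ssh -n multinode-404116-m02 -- "df -h /var | awk 'NR==2{print \$5}'"
	$ minikube -p multinode-404116 ssh -n multinode-404116-m02 -- "sudo systemctl is-active kubelet"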

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-404116 node start m03 -v=7 --alsologtostderr: (28.816055523s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-404116 node delete m03: (1.876402363s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.47s)
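
The go-template in the last check walks every node's status.conditions and prints the status of each node's Ready condition, one line per node; after the delete, every remaining line is expected to read "True". The same readiness check run by hand, with the shell quoting simplified (a sketch, not the exact string the harness passes):

	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'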

                                                
                                    
TestMultiNode/serial/RestartMultiNode (170.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404116 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404116 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m50.342715254s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404116 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (170.96s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404116
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404116-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-404116-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.764506ms)

                                                
                                                
-- stdout --
	* [multinode-404116-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-404116-m02' is duplicated with machine name 'multinode-404116-m02' in profile 'multinode-404116'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404116-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404116-m03 --driver=kvm2  --container-runtime=crio: (46.308380369s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-404116
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-404116: exit status 80 (257.538567ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-404116 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-404116-m03 already exists in multinode-404116-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-404116-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-404116-m03: (1.049866489s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.77s)
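
This test pins down two naming rules visible in the errors above: a new profile may not reuse a machine name that already belongs to another profile, and "node add" refuses to create a node whose name is already taken by a standalone profile. A rough sketch of checking for collisions before picking a name (the profile name below is a hypothetical placeholder):

	$ minikube profile list
	$ minikube start -p my-unique-profile --driver=kvm2 --container-runtime=crio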

                                                
                                    
TestScheduledStopUnix (116.48s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-037833 --memory=2048 --driver=kvm2  --container-runtime=crio
E0429 13:36:19.253567  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-037833 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.448044862s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-037833 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-037833 -n scheduled-stop-037833
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-037833 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-037833 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-037833 -n scheduled-stop-037833
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-037833
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-037833 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-037833
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-037833: exit status 7 (96.975175ms)

                                                
                                                
-- stdout --
	scheduled-stop-037833
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-037833 -n scheduled-stop-037833
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-037833 -n scheduled-stop-037833: exit status 7 (88.481105ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-037833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-037833
--- PASS: TestScheduledStopUnix (116.48s)
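
The scheduled-stop flow exercised here can be replayed directly from the commands in the log: arm a stop, inspect the pending timer, cancel it, then re-arm with a short delay and wait for the host to go down. A condensed sketch using the same flags:

	$ minikube stop -p scheduled-stop-037833 --schedule 5m
	$ minikube status -p scheduled-stop-037833 --format={{.TimeToStop}}
	$ minikube stop -p scheduled-stop-037833 --cancel-scheduled
	$ minikube stop -p scheduled-stop-037833 --schedule 15s
	$ minikube status -p scheduled-stop-037833 --format={{.Host}}     # reports "Stopped" once the timer fires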

                                                
                                    
TestRunningBinaryUpgrade (178.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1727836644 start -p running-upgrade-396169 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1727836644 start -p running-upgrade-396169 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m9.943487268s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-396169 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0429 13:41:02.302350  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
E0429 13:41:19.253908  854660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-847310/.minikube/profiles/functional-341155/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-396169 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m46.952089658s)
helpers_test.go:175: Cleaning up "running-upgrade-396169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-396169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-396169: (1.131235179s)
--- PASS: TestRunningBinaryUpgrade (178.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (137.542241ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-492236] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-847310/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-847310/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)
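
The MK_USAGE error above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. If a kubernetes-version has been persisted in the global config, the fix the message suggests is to unset it and start again without the version flag, roughly:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-492236 --no-kubernetes --driver=kvm2 --container-runtime=crio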

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (129.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492236 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492236 --driver=kvm2  --container-runtime=crio: (2m9.689055118s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-492236 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (129.99s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.675860185s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-492236 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-492236 status -o json: exit status 2 (300.079697ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-492236","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-492236
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-492236: (1.130741846s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.11s)
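
The JSON status above is what the test asserts on: the host keeps running while Kubelet and APIServer report Stopped after the profile is restarted with --no-kubernetes. For scripting against this output, a small sketch assuming jq is available (note the status command itself exits 2 when components are stopped, as the log shows):

	$ minikube -p NoKubernetes-492236 status -o json | jq -r '.Host, .Kubelet, .APIServer'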

                                                
                                    
TestNoKubernetes/serial/Start (49.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492236 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.433590722s)
--- PASS: TestNoKubernetes/serial/Start (49.43s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-492236 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-492236 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.095256ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
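
The non-zero exit here is the point of the check: with Kubernetes disabled, systemctl reports the kubelet unit as not active, and that failure status (3 is systemd's usual code for an inactive unit) comes back through the ssh session as "Process exited with status 3". Run interactively it looks roughly like:

	$ minikube ssh -p NoKubernetes-492236 "sudo systemctl is-active kubelet"
	inactive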

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-492236
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-492236: (2.320717658s)
--- PASS: TestNoKubernetes/serial/Stop (2.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492236 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492236 --driver=kvm2  --container-runtime=crio: (21.777374153s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-492236 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-492236 "sudo systemctl is-active --quiet service kubelet": exit status 1 (239.58715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.305560797 start -p stopped-upgrade-238527 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.305560797 start -p stopped-upgrade-238527 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.908019349s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.305560797 -p stopped-upgrade-238527 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.305560797 -p stopped-upgrade-238527 stop: (12.18935665s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-238527 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-238527 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.334130877s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.43s)
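
The point of this test is that state written by an old release must survive a binary upgrade: the v1.26.0 binary creates and then stops the profile, and the freshly built binary has to adopt that stopped profile and bring it back up. Stripped to its essentials, the flow replayed from the log is:

	$ /tmp/minikube-v1.26.0.305560797 start -p stopped-upgrade-238527 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.305560797 -p stopped-upgrade-238527 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-238527 --memory=2200 --driver=kvm2 --container-runtime=crio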

                                                
                                    
TestPause/serial/Start (90.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-553639 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-553639 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.817736714s)
--- PASS: TestPause/serial/Start (90.82s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-238527
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-238527: (1.095266321s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    

Test skip (32/207)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    